This section discusses several common tools and systems for managing various parts of a computer's environment. These are tools commonly employed by corporate networks for the ease of manageability that they can provide. Some of these tools will seem familiar from the desktop case study in Chapter 14, while others may be new to you. As always, since this is a case study of an actual system, your environment may not be identical to this one.
The first step in configuring your new workstation is to understand exactly what you're going to have to do to it. After all, you can't configure something if you don't know you're supposed to. This section outlines the realities of the network on which this case study resides. It's pretty typical for a corporate environment, so you may find your own environment to be very similar to it; just watch out for potential subtle differences.
The network used in this case study functions like this:
Workstations obtain IP addresses via Dynamic Host Configuration Protocol (DHCP).
Authentication to the system is handled by the standard Network Information Service (NIS). The NIS domain name is "universal".
Users' home directories and common project-related files are shared across workstations (and servers) using the Network Filesystem (NFS) standard.
The organization's Information Technology Management Services (ITMS) group tracks the status of the system with Tangram Enterprise Solutions' Asset Insight software.
The network support group requires root access to all Unix-like systems on the network via an OpenSSH key.
Not surprisingly, these issues are the topics of the remainder of this section. Each of the subsections that follows provides information on how to configure these aspects of the system, but not how to install them; I assume that by now you can do that, after reading Chapters 7 through 13.
The Dynamic Host Configuration Protocol (DHCP) is used by desktop, workstation, and sometimes server computers to obtain their network configuration (such as an IP address) automatically. You've already seen DHCP come up a few times; notably, the case study of the desktop system in Chapter 14 also uses DHCP. As you've no doubt concluded from this, DHCP is a very useful system. (In fact, even the case study of a network firewall in the next chapter uses DHCP. Not only that, it includes a DHCP server!)
Configuring DHCP is as easy as using Red Hat's netconfig program and activating DHCP. If you prefer to perform that task manually, you can edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 (replacing "eth0" with the actual Ethernet device your system uses, if necessary). This file simply sets three variables: DEVICE, ONBOOT, and BOOTPROTO. The values should be "eth0" (or the identifier of your actual Ethernet device), "yes", and "dhcp", respectively. This should be familiar to you by now.
The first variable sets the Ethernet device and should match the file name (for example, don't use DEVICE=eth1 in the file ifcfg-eth0). The second variable determines whether the card is to be activated when the system boots up or only when the ifup command is used manually (such as with ifup eth0). The BOOTPROTO variable sets the protocol to use, which is DHCP in this (and probably almost every) case. Creating the /etc/sysconfig/network-scripts/ifcfg-eth0 file is really all Red Hat's netconfig program does (though it also modifies a few other files).
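Putting the three variables together, the complete /etc/sysconfig/network-scripts/ifcfg-eth0 file for the case study system amounts to this:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0        # must match the "eth0" in the file name
ONBOOT=yes         # bring the interface up automatically at boot time
BOOTPROTO=dhcp     # obtain the IP address and related settings via DHCP
```

If your Ethernet device is not eth0, substitute its identifier in both the file name and the DEVICE line.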
The portmapper service (known as portmap) supports a fairly old programming technique known as the remote procedure call (RPC). RPC is the term for when a program running on one machine accesses a function that resides on another machine, as if the "remote procedure" were actually part of the local program; portmap's job is to map RPC program numbers to the network ports on which those programs listen. RPC is largely unused these days, but a few popular programs still need it. Notably, the NIS and NFS services discussed in this chapter require portmap in order to function. Therefore, you'll have to install portmap to use those services. Red Hat Linux 7.3 includes portmap for you, in the portmap RPM package. After installing it, you'll have to enable it using the chkconfig program; see Chapter 4 for details.
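Assuming the portmap RPM is already installed, the enabling step just described looks something like this when run as root (a sketch, not an exact transcript):

```shell
# Confirm that the portmap package is installed
rpm -q portmap
# Enable portmap so it starts in the default runlevels at boot
chkconfig portmap on
# Start the service immediately rather than waiting for a reboot
service portmap start
```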
It's worth making a quick security-related note. The portmap program has a long history of security weaknesses, so it's not unreasonable to suppose that additional vulnerabilities exist and might be exploited in the future. To limit your exposure, you should not run the portmap service (or any service) unless you really need it. In other words, don't install portmap unless you actually need the NIS and NFS support discussed in this chapter.
The first step to using a system is to log into it. This section describes how the case study system is configured to use the Network Information Services (NIS, or NIS+) for user authentication.
Normally, Unix systems store user information such as home directory, default shell, and password in the file /etc/passwd. However, if one user needs to log into several systems, that information has to be replicated in each system's /etc/passwd file. Having multiple copies of the same information on multiple systems is a management headache. NIS is a solution to this problem.
NIS is a fairly simple scheme, conceptually. Essentially, NIS is just a way of distributing files across a network. This is most useful for configuration files, such as /etc/passwd. In other words, NIS shares files across multiple systems, so that a single central copy can be maintained instead of many copies on many systems. One such file is a passwd file, which is consulted in addition to (or instead of) each system's local passwd file.
Obviously, there is an overlap between some of these files. For example, if a user account is present in both places, which takes precedence: the system's local /etc/passwd file or the version found in NIS? The file /etc/nsswitch.conf defines this priority. For example, Red Hat Linux's default /etc/nsswitch.conf file has this line, which defines the order for passwords:
passwd: files nisplus nis
The three words after "passwd:" each refer to a different source for that file. Because the "files" identifier is first, the system consults the local /etc/passwd file before the copies stored in NIS+ ("nisplus") or the older NIS ("nis"). The defaults should work just fine in the majority of cases, but you can alter the order for a given file if you need to. (For example, you could change the order so that NIS account information takes priority over /etc/passwd.) If you look in /etc/nsswitch.conf, you'll actually see a number of other subsystems that can also load information from NIS. Check the manual page for nsswitch.conf for more information.
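For example, the reordering mentioned above would be a one-line change in /etc/nsswitch.conf:

```shell
# /etc/nsswitch.conf -- NIS sources consulted before the local /etc/passwd
passwd: nisplus nis files
```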
In order for this scheme to work, the system's libraries have to be compiled to use NIS as part of the authentication process. (This is also true of any other systems that use files from NIS, such as those listed in /etc/nsswitch.conf.) This is why it's difficult to install support for NIS after the system has been installed, just as it would be difficult to retrofit PAM into the system, as discussed in Chapter 9.
Actually configuring your workstation to use NIS is fairly simple with Red Hat Linux, since it already includes all of the files you need. To get started, you need to install two packages: yp-tools and ypbind. (NIS was formerly known as "yellow pages"; the name was changed due to a trademark dispute, so some of the files still use the "yp" abbreviation.)
Once you install these packages, you must configure NIS to be able to locate servers. This is generally accomplished by two configuration files: /etc/sysconfig/network and /etc/yp.conf. The /etc/sysconfig/network file should already exist on your system; /etc/yp.conf will be installed by the ypbind package.
As you'll recall from Chapter 4, the /etc/sysconfig/network file contains values for important variables used by the network configuration scripts. For example, the HOSTNAME variable sets the network host name of the system. The relevant variable for NIS is NISDOMAIN, which sets the NIS domain name.
The NIS domain is simply a logical name for a particular configuration of NIS. A given network can have multiple NIS servers running, each serving a different set of users. These configurations are referred to as domains, and a client has to bind to a specific domain on the network. This is the domain being specified in the NISDOMAIN variable.
Your network administrator will have to tell you the name of the NIS domain you should use. On the case study system, the name of this domain is "universal", and so the line in /etc/sysconfig/network is simply NISDOMAIN=universal.
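Putting this together, the case study's /etc/sysconfig/network file looks something like the following sketch (the HOSTNAME value is hypothetical; only the NISDOMAIN line comes from this case study):

```shell
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=workstation1.example.com   # hypothetical host name for this system
NISDOMAIN=universal                 # NIS domain supplied by the network administrator
```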
The /etc/yp.conf file typically has just a single line, which tells the NIS client software where to locate the NIS server. There are three possible ways to write this line, and all three are documented in comments in the /etc/yp.conf file itself. One way is to specify the host name of an NIS server to use for a given domain. Another is to have the client broadcast over the network to locate an NIS server for the domain. The third is to hard-code an NIS server, regardless of domain.
You'll therefore need two pieces of information from your network administrator: the NIS domain (which you already have from the previous section) and possibly the host name of the NIS server itself. That, actually, leads to a rather odd subtlety of configuring NIS.
The issue is that the format you use in /etc/yp.conf can vary based on how your network is configured. For example, the case study system has this line:
domain universal server nis01
However, the case study system is actually one of several identical configurations on the network. Or rather, nearly identical. These other systems differ only in their /etc/yp.conf files. Many of them have this line, instead of the preceding line:
domain universal broadcast
The reason for the difference seems to be the router configurations of the network. These systems are located in various parts of the building and are therefore on various subnets of the LAN. This means that the NIS server is accessible via broadcast from some workstations, but it is not accessible from others. When a new system is added, frequently the /etc/yp.conf file will have to be created by trial and error. You may not encounter this problem on your network, but then again you might. Just be sure to try all the possibilities when you attempt to configure your system.
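For reference, the three possible forms of the /etc/yp.conf line look like this; a working file contains just one of them (nis01 is the case study's server name):

```shell
# /etc/yp.conf -- use exactly ONE of the following forms:
domain universal server nis01   # explicit server for the "universal" domain
domain universal broadcast      # locate a server for the domain via broadcast
ypserver nis01                  # hard-coded server, regardless of domain
```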
After you've configured the NIS system to be able to communicate with the NIS server, you'll be all set. The NIS system will usually automatically load whatever configuration files are available (such as the passwd user information file) from the NIS server.
NIS provides centralized authentication. The analog of this is the Network Filesystem (NFS) service, which provides centralized data storage. Typically, users' home directories are stored on NFS servers, which are mounted by other computers when users log in. Thus, users' data "follows" them around the network. Sometimes, project-specific shared directories are created to allow users to work on common files more easily. This section describes how NFS is configured for the case study.
Generally, NFS is fairly closely linked to NIS. That is, probably the most convenient way to distribute NFS configuration information is via NIS. (There's a reason why NIS is called "Network Information Services" and not just "User Information Services.") The core NFS functionality on Linux systems is found in the kernel modules that implement NFS. There are two such modules: one is an NFS server and the other is the client. The server module isn't used on the case study system (since it isn't an NFS server).
The client kernel module is very straightforward: It's simply a filesystem driver. That is, it implements a filesystem using the standard Linux Virtual Filesystem (VFS) framework, in precisely the same way as the standard Linux ext3 and CD-ROM iso9660 filesystems are implemented. You can mount and unmount NFS filesystems the same way as you would any other filesystem, using "nfs" as the type for the -t option to mount.
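For example, mounting an NFS export by hand looks just like any other mount; the server name and export path here are purely hypothetical:

```shell
# Mount a remote NFS export on a local directory (names are examples only)
mkdir -p /mnt/projects
mount -t nfs nfsserver:/export/projects /mnt/projects
# ... work with the files ...
umount /mnt/projects
```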
In other words, the NIS client authenticates a user using information fetched from the NIS server, and it also obtains information about the user's account, such as the location of the home directory. After that, the kernel simply mounts the user's NFS directory as a normal filesystem.
The implementation of NFS used by Linux is notoriously finicky and has been known to have strange issues with some vendors' NFS implementations (sometimes due to bugs in the vendors' products rather than Linux). If you encounter strange problems that seem to be related to NFS, check your distribution's bug or errata documentation, because you may not be the only victim. Red Hat Linux has been struck by such issues as recently as version 7.3.
The core NFS support is contained in the kernel. However, a couple of support daemons are required to use NFS in practice. One is the portmap service, which I've already discussed. In addition, there are two NFS-specific services (both of which use portmap): the NFS locking daemon and the NFS Network Status Monitor (NSM).
The NFS file-locking daemon is an RPC service that uses the portmap service. It handles distributed file locking for NFS. File locking prevents two users from modifying a file at the same time, since only one user can "lock" a file at any given moment. The NFS locking daemon implements this for NFS clients. (As it happens, distributed file locking is not easy to solve, and this daemon has its limitations. However, it works reasonably well most of the time, at least for desktop systems.) The actual program is /sbin/rpc.lockd, which is contained in the nfs-utils RPM package.
The NFS NSM is implemented by the program /sbin/rpc.statd. Like the locking daemon, this is an RPC program that relies on portmap. It monitors the status of the NFS servers, essentially watching for reboots, and is used by the locking daemon to keep lock state in sync with the servers. This program is also included in the nfs-utils package.
These two daemons are generally paired; that is, you'll most likely never need one without the other. Thus, the nfs-utils package manages them both from a single SysV service named nfslock. To install this, simply install the nfs-utils package and then use the chkconfig command (discussed in Chapter 4) to enable the newly installed nfslock service for your runlevel (usually 3 or 5). That single service will manage both of the NFS support daemons.
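Under the assumptions just stated (nfs-utils installed, runlevel 3 or 5), enabling both daemons comes down to a couple of commands run as root:

```shell
# Enable the nfslock service (rpc.lockd and rpc.statd) for runlevels 3 and 5
chkconfig --level 35 nfslock on
# Start it now rather than waiting for the next boot
service nfslock start
```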
The real question is how directories get mounted. After all, the kernel module may be a filesystem driver, but at some point, a program has to actually mount the user's home directory. Which program handles that?
The answer is the automounter, or autofs service. This is a daemon program that mounts NFS (or other) directories on demand, when they are actually accessed. This is both convenient and efficient. It's convenient because users (and administrators) don't have to manually mount home directories, shared project drives, and so on. It's efficient because mounting these directories on demand means that they don't have to be mounted up front, when the system boots, wasting resources when they're not in use.
In the case of this sample system, the autofs service takes its cue from the information picked up by NIS. (See the "automount" entry in nsswitch.conf.) When a user logs in, for example, the user's home directory will be referenced (such as when the shell attempts to change directories to the user's home directory upon login). The automounter will detect that the directory was referenced and automatically mount it.
There is one more relevant service of which you should be aware: the netfs service. This is a simple SysV init script that Red Hat Linux provides to enable all network filesystems. (This includes not only NFS, but also other protocols, such as SMB and NCP.) This service will mount any configured NFS file systems, but it will not cause any automounted file systems to be mounted. (In this case study, all the NFS filesystems are automounted, so the netfs service doesn't do much, but it's important to know about if your scenario differs—which is quite possible.)
Installing NFS and NIS is frequently very easy, especially with recent versions of Red Hat Linux. However, it's not 100% foolproof, and sometimes things go wrong, especially in more sophisticated cases or with less centralized administration. Additionally, there are quirks in the Linux implementation of NFS in particular that can cause headaches. (Linux kernels in the 2.4 series include support for NFS versions 2 and 3; make sure you're using the correct version for your network, using the lsmod and modprobe commands to check and load the relevant modules.)
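The module check mentioned above might look like this as root (a sketch; module names can vary slightly between kernel builds):

```shell
# List any NFS-related kernel modules that are currently loaded
lsmod | grep -i nfs
# Load the NFS client module manually if it is missing
modprobe nfs
```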
The upshot of all this is that configuring NFS is usually very easy once you have NIS properly configured. Essentially, you simply need to install the netfs and autofs services and configure them to start when the system boots, using the chkconfig command described in Chapter 4. However, NFS and NIS are quite complicated systems, and this book doesn't have space to discuss them completely. This section has simply outlined one particular configuration: the one used by the case study system. There are many, many other ways to configure these services. Check with your network administrator for information and assistance.
The configuration of NIS and NFS in this chapter is optimized for the particular network on which the case study system resides. In this case, the NFS and autofs configurations are stored via NIS. However, your environment may use a simpler technique, and you may have to manually configure NFS by modifying the files /etc/auto.master and /etc/fstab.
The /etc/auto.master file is the real configuration for autofs and can be used to automatically mount other devices (such as floppy drives and ZIP drives) through the automounter. (Essentially, a version of the /etc/auto.master file is what is stored in NIS on the case study system.) The /etc/fstab file can be used to configure specific NFS directories to be mounted. For more information, see the manual page for autofs, the Automount how-tos, and the NIS/NIS+ documentation at the Linux Documentation Project's site (http://www.linuxdoc.org) and Red Hat's site (http://www.redhat.com).
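If your site doesn't distribute automounter maps via NIS, a manual configuration might look like this sketch; all server names, paths, and user names here are hypothetical:

```shell
# /etc/auto.master -- mount entries under /home using the map in /etc/auto.home
/home   /etc/auto.home  --timeout=60

# /etc/auto.home -- map entry: /home/jdoe is automounted from the NFS server
jdoe    -rw,soft        nfsserver:/export/home/jdoe

# /etc/fstab -- alternatively, a statically configured NFS mount
# device                    mount-point   type  options   dump fsck
# nfsserver:/export/shared  /mnt/shared   nfs   defaults  0    0
```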
One problem administrators have with a large network is simply tracking systems. Frequently, users will upgrade, remove, or add systems to a network, sometimes without the knowledge of the support organization. The support group, of course, can't support what they don't know about, and so this can present a problem.
To help resolve this, many network administrators make use of so-called asset management software. These are programs installed on workstations and sometimes servers that simply provide a way for administrators to track which machines are on a network and what the specifications of those machines are.
These programs are typically commercial software, so source code usually isn't available. Additionally, such programs were usually first written for other operating systems and have only recently begun being ported to Linux. Consequently, they sometimes suffer from annoying quirks, such as restrictions on where they can be installed.
If your network support group uses such software, you may or may not have much input on where it is to be installed. However, if you do have input, you should remember that it's just another piece of software like any other. Ideally, you would be provided with an RPM to install, but if that isn't possible, a good installation location would be /opt or /usr/local.
The case study system uses a tool known as Asset Insight (produced by Tangram Enterprise Solutions at http://www.tangram.com) as asset-tracking software. The Linux version of this software is a fairly lightweight daemon process that tracks certain aspects of the system, such as system specifications.
The administrators of a network are responsible for keeping that network, as well as the machines on it, running smoothly. They obviously can't accomplish this if they don't have the ability to manage the individual systems on the network. Therefore, the case study system is configured to provide remote root access to the network support group via the SSH server.
The first step, obviously, is to install the OpenSSH server. This has been discussed several times already in this book; a complete discussion is provided in Chapter 8. The case study system uses the OpenSSH packages provided by Red Hat Linux 7.3.
The next step is to add the appropriate key to the ~root/.ssh/authorized_keys file on the case study system. The network support group maintains an SSH key pair. The public key in this pair is installed on all the Unix systems on the network, permitting secure access to those systems to those who have access to the corresponding private key. That private key is controlled by the network support group.
Providing remote root access, therefore, is as easy as copying the public key provided by the support group into the .ssh/authorized_keys file in the root user's home directory; specifically, the file will be /root/.ssh/authorized_keys since /root is the root user's home directory.
Once the key is installed, it's a good idea to verify the key with the support group using some "out of band" technique, such as a simple phone call. This ensures that you're not installing a compromised key sent by an attacker. After you've installed and verified the key, the network support group will have secure administrative access to your workstation.
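The key-installation and verification steps described above might look like this in practice, run as root (the public key's file name is hypothetical):

```shell
# Create root's .ssh directory with safe permissions, if it doesn't exist
mkdir -p /root/.ssh
chmod 700 /root/.ssh
# Append the support group's public key (file name is an example)
cat support-group-key.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
# Print the key's fingerprint so it can be verified over the phone
ssh-keygen -l -f support-group-key.pub
```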
Now that you've installed some basic tools to allow your workstation to exist peacefully on the network, it's time to start installing the software you need to actually do your work. The next section outlines how to install the development environment used by this case study.