3.1 Getting the Most from Common Commands


In this section, we consider advanced and administrative uses of familiar Unix commands.

3.1.1 Getting Help

The manual page facility is the quintessentially Unix approach to online help: superficially minimalist, often obscure, but mostly complete. It's also easy to use, once you know your way around it.

Undoubtedly, the basics of the man command are familiar: getting help for a command, specifying a specific section, using -k (or apropos) to search for entries for a specific topic, and so on.

There are a couple of man features that I didn't discover until I'd been working on Unix systems for years (I'd obviously never bothered to run man man). The first is that you can request multiple manual pages within a single man command:

$ man umount fsck newfs

man presents the pages as separate files to the display program, and you can move among them using its normal method (for example, with :n in more).

On FreeBSD, Linux, and Solaris systems, man also has a -a option, which retrieves the specified manual page(s) from every section of the manual. For example, the first command below displays the introductory manual page for every section for which one is available, and the second command displays the manual pages for both the chown command and system call:

$ man -a intro
$ man -a chown

Manual pages are generally located in a predictable location within the filesystem, often /usr/share/man. You can configure the man command to search multiple man directory trees by setting the MANPATH environment variable to the colon-separated list of desired directories.
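For example, a Bourne-shell setup that adds a site-local manual tree might look like the following sketch (the /usr/local/man path is an illustrative assumption; substitute your own directories):

```shell
# Prepend a site-local manual tree to the standard location
# (sh/ksh syntax; /usr/local/man is an assumed local path).
MANPATH=/usr/local/man:/usr/share/man
export MANPATH
echo "$MANPATH"
```

With this setting, man searches /usr/local/man before falling back to /usr/share/man.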

3.1.1.1 Changing the search order

The man command searches the various manual page sections in a predefined order: commands first, followed by system calls and library functions, and then the other sections (i.e., 1, 6, 8, 2, 3, 4, 5, and 7 for BSD-based schemes). The first manual page matching the one specified on the command line is displayed. In some cases, a different order might make more sense. Many operating systems allow this ordering scheme to be customized via the MANSECTS entry within a configuration file. For example, Solaris allows the search order to be customized via the MANSECTS entry in the /usr/share/man/man.cf configuration file. You specify a list of sections in the order in which you want them to be searched:

MANSECTS=8,1,2,3,4,5,6,7

This ordering brings administrative command sections to the beginning of the list.

Here are the available ordering customization locations for the versions we are considering that offer this feature:

FreeBSD

MANSECT environment variable (colon-separated)

Linux (Red Hat)

MANSECT in /etc/man.config (colon-separated)

Linux (SuSE)

SECTION in /etc/manpath.config (space-separated)

Solaris

MANSECTS in /usr/share/man/man.cf and/or the top level directory of any manual page tree (comma-separated)

3.1.1.2 Setting up man -k

It's probably worth mentioning how to get man -k to work if your system claims to support it, but nothing comes back when you use it. This command (and its alias apropos) uses a data file indexing all available manual pages. The file often must be initially created by the system administrator, and it may also need to be updated from time to time.

On most systems, the command to create the index file is makewhatis, and it must be run by root. The command does not require any arguments except on Solaris systems, where the top-level manual page subdirectory is given:

# makewhatis                            Most systems
# makewhatis /usr/share/man             Solaris

On AIX, HP-UX, and Tru64, the older catman -w command is used instead.

3.1.2 Piping into grep and awk

As you undoubtedly already know, the grep command searches its input for lines containing a given pattern. Users commonly use grep to search files. What might be new is some of the ways grep is useful in pipes with many administrative commands. For example, if you want to find out about all of a certain user's current processes, pipe the output of the ps command to grep and search for her username:

% ps aux | grep chavez
chavez   8684 89.5  9.6 27680 5280 ?  R N  85:26 /home/j90/l988
root    10008 10.0  0.8  1408  352 p2 S     0:00 grep chavez
chavez   8679  0.0  1.4  2048  704 ?  I N   0:00 -csh (csh)
chavez   8681  0.0  1.3  2016  672 ?  I N   0:00 /usr/nqs/sc1
chavez   8683  0.0  1.3  2016  672 ?  I N   0:00 csh -cb rj90
chavez   8682  0.0  2.6  1984 1376 ?  I N   0:00 j90

This example uses the BSD version of ps, with the options that list every single process on the system,[1] and then uses grep to pick out the ones belonging to user chavez. If you'd like the header line from ps included as well, use a command like:

[1] Under HP-UX and for Solaris' /usr/bin/ps, the corresponding command is ps -ef.

% ps -aux | egrep 'chavez|PID'

Now that's a lot to type every time, but you could define an alias if your shell supports them. For example, in the C shell you could use this one:

% alias pu "ps -aux | egrep '\!:1|PID'"
% pu chavez
USER    PID %CPU %MEM    SZ  RSS TT  STAT  TIME COMMAND
chavez 8684 89.5  9.6 27680 5280 ?   R N  85:26 /home/j90/l988
...

Another useful place for grep is with man -k. For instance, I once needed to figure out where the error log file was on a new system; the machine kept displaying annoying messages from the error log indicating that disk 3 had a hardware failure. Now, I already knew that, and it had even been fixed. I tried man -k error: 64 matches; man -k log was even worse: 122 manual pages. But man -k log | grep error produced only 9 matches, including a nifty command to blast error log entries older than a given number of days.

The awk command is also a useful component in pipes. It can be used to selectively manipulate the output of other commands in a more general way than grep. A complete discussion of awk is beyond the scope of this book, but a few examples will show you some of its capabilities and enable you to investigate others on your own.

One thing awk is good for is picking out and possibly rearranging columns within command output. For example, the following command produces a list of all users running the quake game:

$ ps -ef | grep "[q]uake" | awk '{print $1}'

This awk command prints only the first field from each line of ps output passed to it by grep. The search string for grep may strike you as odd, since the brackets enclose only a single character. The command is constructed that way so that the ps line for the grep command itself will not be selected (since the string "quake" does not appear in it). It's basically a trick to avoid having to add grep -v grep to the pipe between the grep and awk commands.
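The effect is easy to verify with canned input standing in for the ps output; the second line below contains the literal string "[q]uake" (as the grep process's own command line would), and is correctly filtered out:

```shell
# Simulated ps output: the second line mimics the grep process itself.
# The pattern "[q]uake" matches "quake" but not the literal "[q]uake".
printf '%s\n' 'chavez 8684 quake' 'root 10008 grep [q]uake' | grep "[q]uake"
```

Only the first line is displayed; the line for the simulated grep process does not match.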

Once you've generated the list of usernames, you can do what you need to with it. One possibility is simply to record the information in a file:

$ (date ; ps -ef | grep "[q]uake" | awk '{print $1 " [" $7 "]"}' \
  | sort | uniq) >> quaked.users

This command sends the list of users currently playing quake, along with the CPU time used so far enclosed in square brackets, to the file quaked.users, preceding the list with the current date and time. We'll see a couple of other ways to use such a list in the course of this chapter.

awk can also be used to sum up a column of numbers. For example, this command searches the entire local filesystem for files owned by user chavez and adds up all of their sizes:

# find / -user chavez -fstype 4.2 ! -name /dev/\* -ls | \
  awk '{sum+=$7}; END {print "User chavez total disk use = " sum}'
User chavez total disk use = 41987453

The awk component of this command accumulates a running total of the seventh column from the find command that holds the number of bytes in each file, and it prints out the final value after the last line of its input has been processed. awk can also compute averages; in this case, the average number of bytes per file would be given by the expression sum/NR placed into the command's END clause. The denominator NR is an awk internal variable. It holds the line number of the current input line and accordingly indicates the total number of lines read once all of them have been processed.

awk can be used in a similar way with the date command to generate a filename based upon the current date. For example, the following command places the output of the sys_doc script into a file named for the current date and host:

$ sys_doc  > `date | awk '{print $3 $2 $6}'`.`hostname`.sysdoc

If this command were run on October 24, 2001, on host ophelia, the filename generated by the command would be 24Oct2001.ophelia.sysdoc.

Recent implementations of date allow it to generate such strings on its own, eliminating the need for awk. The following command illustrates these features. It constructs a unique filename for a scratch file by telling date to display the literal string junk_ followed by the day of the month, short form month name, 2-digit year, and hour, minutes and seconds of the current time, ending with the literal string .junk:

$ date +junk_%d%b%y%H%M%S.junk  junk_08Dec01204256.junk

We'll see more examples of grep and awk later in this chapter.

Is All of This Really Necessary?

If all of this fancy pipe fitting seems excessive to you, be assured that I'm not telling you about it for its own sake. The more you know the ins and outs of Unix commands, both basic and obscure, the better prepared you'll be for the inevitable unexpected events that you will face. For example, you'll be able to come up with an answer quickly when the division director (or department chair or whoever) wants to know what percentage of the aggregate disk space in a local area network is used by the chem group. Virtuosity and wizardry needn't be goals in themselves, but they will help you develop two of the seven cardinal virtues of system administration: flexibility and ingenuity. (I'll tell you what the others are in future chapters.)

3.1.3 Finding Files

Another common command of great use to a system administrator is find. find is one of those commands that you wonder how you ever lived without once you learn it. It has one of the most obscure manual pages in the Unix canon, so I'll spend a bit of time explaining it (skip ahead if it's already familiar).

find locates files with common, specified characteristics, searching anywhere on the system you tell it to look. Conceptually, find has the following syntax:[2]

[2] Syntactically, find does not distinguish between file-selection options and action-related options, but it is often helpful to think of them as separate types as you learn to use find.

# find starting-dir(s) matching-criteria-and-actions

Starting-dir(s) is the set of directories where find should start looking for files. By default, find searches all directories underneath the listed directories. Thus, specifying / as the starting directory would search the entire filesystem.

The matching-criteria tell find what sorts of files you want to look for. Some of the most useful are shown in Table 3-1.

Table 3-1. find command matching criteria options

Option

Meaning

-atime n

File was last accessed exactly n days ago.

-mtime n

File was last modified exactly n days ago.

-newer file

File was modified more recently than file was.

-size n

File is n 512-byte blocks long (rounded up to next block).

-type c

Specifies the file type: f=plain file, d=directory, etc.

-fstype typ

Specifies filesystem type.

-name nam

The filename is nam.

-perm p

The file's access mode is p.

-user usr

The file's owner is usr.

-group grp

The file's group owner is grp.

-nouser

The file's owner is not listed in the password file.

-nogroup

The file's group owner is not listed in the group file.

These may not seem all that useful at first. Why would you want a file accessed exactly three days ago, for instance? However, you may precede time periods, sizes, and other numeric quantities with a plus sign (meaning "more than") or a minus sign (meaning "less than") to get more useful criteria. Here are some examples:

-mtime +7      Last modified more than 7 days ago
-atime -2      Last accessed less than 2 days ago
-size +100     Larger than 50K

You can also include wildcards with the -name option, provided that you quote them. For example, the criteria -name '*.dat' specifies all filenames ending in .dat.

Multiple conditions are joined with AND by default. Thus, to look for files last accessed more than two months ago and last modified more than four months ago, you would use these options:

-atime +60 -mtime +120

Options may also be joined with -o for OR combination, and grouping is allowed using escaped parentheses. For example, the matching criteria below specifies files last accessed more than seven days ago or last modified more than 30 days ago:

\( -atime +7 -o -mtime +30 \)

An exclamation point may be used for NOT (be sure to quote it if you're using the C shell). For example, the matching criteria below specify all .dat files except gold.dat:

! -name gold.dat -name \*.dat

The -perm option allows you to search for files with a specific access mode (numeric form). Using an unsigned value specifies files with exactly that permission setting, and preceding the value with a minus sign searches for files with at least the specified access. (In other words, all of the permission bits in the specified mode must be set in the file's mode, but other bits may be set as well.) Here are some examples:

-perm 755       Permission = rwxr-xr-x
-perm -002      World-writable files
-perm -4000     Setuid access is set
-perm -2000     Setgid access is set
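A quick way to see the minus-sign semantics in action is to search a scratch directory (created here just for the demonstration) for world-writable files:

```shell
# Only the file with the world-write (002) bit set matches -perm -002.
tmp=$(mktemp -d)
touch "$tmp/safe"; chmod 644 "$tmp/safe"    # rw-r--r--: not matched
touch "$tmp/open"; chmod 666 "$tmp/open"    # rw-rw-rw-: matched
find "$tmp" -type f -perm -002
```

Only the path of the mode-666 file is printed; the exact-match form (-perm 666) would behave the same way here, but -perm -002 also matches files with additional bits set, such as mode 777.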

The actions options tell find what to do with each file it locates that matches all the specified criteria. Some available actions are shown in Table 3-2.

Table 3-2. find actions

Option

Meaning

-print

Display pathname of matching file.

-ls[3]

Display long directory listing for matching file.

-exec cmd

Execute command on file.

-ok cmd

Prompt before executing command on file.

-xdev

Restrict the search to the filesystem of the starting directory (typically used to bypass mounted remote filesystems).

-prune

Don't descend into directories encountered.

[3] Not available under HP-UX.

The default on many newer systems is -print, although forgetting to include it on older systems like SunOS will result in a successful command with no output. Commands for -exec and -ok must end with an escaped semicolon (\;). The form {} may be used in commands as a placeholder for the pathname of each found file. For example, to delete each matching file as it is found, specify the following option to the find command:

-exec rm -f {} \;

Note that there are no spaces between the opening and closing curly braces. The curly braces may only appear once within the command.

Now let's put the parts together. The command below lists the pathname of all C source files under the current directory:

$ find . -name \*.c -print

The starting directory is "." (the current directory), the matching criteria specify filenames ending in .c, and the action to be performed is to display the pathname of each matching file. This is a typical use of find by ordinary users. Other common uses include searching for misplaced files and feeding file lists to cpio.

find has many administrative uses, including:

  • Monitoring disk use

  • Locating files that pose potential security problems

  • Performing recursive file operations

For example, find may be used to locate large disk files. The command below displays a long directory listing for all files under /chem larger than 1 MB (2048 512-byte blocks) that haven't been modified in a month:

$ find /chem -size +2048 -mtime +30 -exec ls -l {} \;

Of course, we could also use -ls rather than the -exec clause. In fact, it is more efficient because the directory listing is handled by find internally (rather than having to spawn a subshell for every file). To search for files not modified in a month or not accessed in three months, use this command:

$ find /chem -size +2048 \( -mtime +30 -o -atime +120 \) -ls

Such old, large files might be candidates for tape backup and deletion if disk space is short.

find can also delete files automatically as it finds them. The following is a typical administrative use of find, designed to automatically delete old junk files on the system:

# find / \( -name a.out -o -name core -o -name '*~' \
    -o -name '.*~' -o -name '#*#' \) -type f -atime +14 \
    -exec rm -f {} \; -o -fstype nfs -prune

This command searches the entire filesystem and removes various editor backup files, core dump files, and random executables (a.out) that haven't been accessed in two weeks and that don't reside on a remotely mounted filesystem. The logic is messy: the final -o option ORs all the options that preceded it with those that followed it, each of which is computed separately. Thus, the final operation finds files that match either of two criteria:

  • The filename matches, it's a plain file, and it hasn't been accessed for 14 days.

  • The filesystem type is nfs (meaning a remote disk).

If the first criteria set is true, the file gets removed; if the second set is true, a "prune" action takes place, which says "don't descend any lower into the directory tree." Thus, every time find comes across an NFS-mounted filesystem, it will move on, rather than searching its entire contents as well.

Matching criteria and actions may be placed in any order, and they are evaluated from left to right. For example, the following find command lists all regular files under the directories /home and /aux1 that are larger than 500K and were last accessed over 30 days ago (done by the options through -print); additionally, it removes those named core:

# find /home /aux1 -type f -atime +30 -size +1000 -print \
    -name core -exec rm {} \;

find also has security uses. For example, the following find command lists all files that have setuid or setgid access set (see Chapter 7).

# find / -type f \( -perm -2000 -o -perm -4000 \) -print

The output from this command could be compared to a saved list of setuid and setgid files, in order to locate any newly created files requiring investigation:

# find / \( -perm -2000 -o -perm -4000 \) -print | \
    diff - files.secure

find may also be used to perform the same operation on a selected group of files. For example, the command below changes the ownership of all the files under user chavez's home directory to user chavez and group physics:

# find /home/chavez -exec chown chavez {} \; \
                    -exec chgrp physics {} \;

The following command gathers all C source files anywhere under /chem into the directory /chem1/src:

# find /chem -name '*.c' -exec mv {} /chem1/src \;

Similarly, this command runs the script prettify on every C source file under /chem:

# find /chem -name '*.c' -exec /usr/local/bin/prettify {} \;

Note that the full pathname for the script is included in the -exec clause.

Finally, you can use the find command as a simple method for tracking changes that have been made to a system in the course of a certain time period or as the result of a certain action. Consider these commands:

# touch /tmp/starting_time
# perform some operation
# find / -newer /tmp/starting_time

The output of the final find command displays all files modified or added as a result of whatever action was performed. It does not directly tell you about deleted files, but it lists modified directories (which can be an indirect indication).
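The same idiom can be tried safely against a scratch tree instead of the whole filesystem:

```shell
# Track changes in a scratch directory with -newer.
tmp=$(mktemp -d)
touch "$tmp/starting_time"
sleep 1                           # ensure a later timestamp
echo data > "$tmp/newfile"        # the "operation"
find "$tmp" -newer "$tmp/starting_time"
```

The output lists the new file, along with the directory itself (its modification time changed when the file was created), illustrating the indirect indication of additions mentioned above.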

3.1.4 Repeating Commands

find is one solution when you need to perform the same operation on a group of files. The xargs command is another way of automating similar commands on a group of objects; xargs is more flexible than find because it can operate on any set of objects, regardless of what kind they are, while find is limited to files and directories.

xargs is most often used as the final component of a pipe. It appends the items it reads from standard input to the Unix command given as its argument. For example, the following command increases the nice number of all quake processes by 10, thereby lowering each process's priority:

# ps -ef | grep "[q]uake" | awk '{print $2}' | xargs renice +10

The pipe preceding the xargs command extracts the process ID from the second column of the ps output for each instance of quake, and then xargs runs renice using all of them. The renice command takes multiple process IDs as its arguments, so there is no problem sending all the PIDs to a single renice command as long as there are not a truly inordinate number of quake processes.

You can also tell xargs to send its incoming arguments to the specified command in groups by using its -n option, which takes the number of items to use at a time as its argument. If you wanted to run a script for each user who is currently running quake, for example, you could use this command:

# ps -ef | grep "[q]uake" | awk '{print $1}' | xargs -n1 warn_user

The xargs command will take each username in turn and use it as the argument to warn_user.
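The grouping behavior of -n is easy to see with a trivial command:

```shell
# With -n2, xargs invokes echo once per pair of incoming arguments.
echo a b c d | xargs -n2 echo group:
```

This prints "group: a b" and "group: c d" on separate lines: two echo invocations, each receiving two of the four items.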

So far, all of the xargs commands we've looked at have placed the incoming items at the end of the specified command. However, xargs also allows you to place each incoming line of input at a specified position within the command to be executed. To do so, you include its -i option and use the form {} as a placeholder for each incoming line within the command. For example, this command runs the System V chargefee utility for each user running quake, assessing them 10000 units:

# ps -ef | grep "[q]uake" | awk '{print $1}' | \
  xargs -i chargefee {} 10000

If curly braces are needed elsewhere within the command, you can specify a different pair of placeholder characters as the argument to -i.
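Modern xargs implementations spell this option -I, with the placeholder string as its argument; here is the insertion behavior with sample usernames standing in for real ps output:

```shell
# Each input line replaces {} in the middle of the command.
# -I is the newer spelling of the older -i option described in the text.
printf '%s\n' alice bob | xargs -I {} echo user {} warned
```

This prints "user alice warned" followed by "user bob warned", one command invocation per input line.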

Substitutions like this can get rather complicated. xargs's -t option displays each constructed command before executing, and the -p option allows you to selectively execute commands by prompting you before each one. Using both options together provides the safest execution mode and also enables you to nondestructively debug a command or script by answering no for every offered command.

-i and -n don't interact the way you might think they would. Consider this command:

$ echo a b c d e f | xargs -n3 -i echo before {} after
before a b c d e f after
$ echo a b c d e f | xargs -i -n3 echo before {} after
before {} after a b c
before {} after d e f

You might expect that these two commands would be equivalent and that they would both produce two lines of output:

before a b c after  before d e f after

However, neither command produces this output, and the two commands do not operate identically. What is happening is that -i and -n conflict with one another, and the one appearing last wins. So, in the first command, -i is what is operative, and each line of input is inserted into the echo command. In the second command, the -n3 option is used, three arguments are placed at the end of each echo command, and the curly braces are treated as literal characters.

Our first use of -i worked properly because the usernames are coming from separate lines in the ps command output, and these lines are retained as they flow through the pipe to xargs.

If you want xargs to execute commands containing pipes, I/O redirection, compound commands joined with semicolons, and so on, there's a bit of a trick: use the -c option to a shell to execute the desired command. I occasionally want to look at the final lines of a group of files and then view all of them a screen at a time. In other words, I'd like to run a command like this and have it "work":

$ tail test00* | more

On most systems, this command displays lines only from the last file. However, I can use xargs to get what I want:

$ ls -1 test00* | xargs -i /usr/bin/sh -c \
  'echo "****** {}:"; tail -15 {}; echo ""' | more

This displays the last 15 lines of each file, preceded by a header line containing the filename and followed by a blank line for readability.

You can use a similar method for lots of other kinds of repetitive operations. For example, this command sorts and de-dups all of the .dat files in the current directory:

$ ls *.dat | xargs -i /usr/bin/sh -c "sort -u -o {} {}"

3.1.5 Creating Several Directory Levels at Once

Many people are unaware of the options offered by the mkdir command. These options allow you to set the file mode at the same time as you create a new directory and to create multiple levels of subdirectories with a single command, both of which can make your use of mkdir much more efficient.

For example, each of the following two commands sets the mode on the new directory to rwxr-xr-x, using mkdir's -m option:

$ mkdir -m 755 ./people  $ mkdir -m u=rwx,go=rx ./places

You can use either a numeric mode or a symbolic mode as the argument to the -m option. You can also use a relative symbolic mode, as in this example:

$ mkdir -m g+w ./things

In this case, the mode changes are applied to the default mode as set with the umask command.
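For example, with a umask of 022 the default directory mode is 755, so g+w yields 775. The octal mode is displayed here with stat -c, a GNU coreutils feature and therefore an assumption about the system:

```shell
# A relative symbolic mode is applied on top of the umask-derived default:
# umask 022 gives new directories mode 755, and g+w then produces 775.
umask 022
base=$(mktemp -d)
mkdir -m g+w "$base/things"
stat -c %a "$base/things"         # GNU stat; prints 775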

mkdir's -p option tells it to create any missing parents required for the subdirectories specified as its arguments. For example, the following command will create the subdirectories ./a and ./a/b if they do not already exist and then create ./a/b/c:

$ mkdir -p ./a/b/c

The same command without -p will give an error if all of the parent subdirectories are not already present.

3.1.6 Duplicating an Entire Directory Tree

It is fairly common to need to move or duplicate an entire directory tree, preserving not only the directory structure and file contents but also the ownership and mode settings for every file. There are several ways to accomplish this, using tar, cpio, and sometimes even cp. I'll focus on tar and then look briefly at the others at the end of this section.

Let's make this task more concrete and assume we want to copy the directory /chem/olddir as /chem1/newdir (in other words, we want to change the name of the olddir subdirectory as part of duplicating its entire contents). We can take advantage of tar's -p option, which restores ownership and access modes along with the files from an archive (it must be run as root to set file ownership), and use these commands to create the new directory tree:

# cd /chem1
# tar -cf - -C /chem olddir | tar -xvpf -
# mv olddir newdir

The first tar command creates an archive consisting of /chem/olddir and all of the files and directories underneath it and writes it to standard output (indicated by the - argument to the -f option). The -C option sets the current directory for the first tar command to /chem. The second tar command extracts files from standard input (again indicated by -f -), retaining their previous ownership and protection. The second tar command gives detailed output (requested with the -v option). The final mv command changes the name of the newly created subdirectory of /chem1 to newdir.

If you want only a subset of the files and directories under olddir to be copied to newdir, you would vary the previous commands slightly. For example, these commands copy the src, bin, and data subdirectories and the logfile and .profile files from olddir to newdir, duplicating their ownership and protection:

# mkdir /chem1/newdir
set ownership and protection for newdir if necessary
# cd /chem/olddir
# tar -cvf - src bin data logfile.* .profile | \
  tar -xvpf - -C /chem1/newdir

The first two commands are necessary only if /chem1/newdir does not already exist.

This command performs a similar operation, copying only a single branch of the subtree under olddir:

# mkdir /chem1/newdir
set ownership and protection for newdir if necessary
# cd /chem1/newdir
# tar -cvf - -C /chem/olddir src/viewers/rasmol | tar -xvpf -

These commands create /chem1/newdir/src and its viewers subdirectory but place nothing in them but rasmol.

If you prefer cpio to tar, cpio can perform similar functions. For example, this command copies the entire olddir tree to /chem1 (again as newdir):

# mkdir /chem1/newdir
set ownership and protection for newdir if necessary
# cd /chem/olddir
# find . -print | cpio -pdvm /chem1/newdir

On all of the systems we are considering, the cp command has a -p option as well, and these commands create newdir:

# cp -pr /chem/olddir /chem1
# mv /chem1/olddir /chem1/newdir

The -r option stands for recursive and causes cp to duplicate the source directory structure in the new location.

Be aware that tar works differently than cp does in the case of symbolic links. tar recreates links in the new location, while cp converts symbolic links to regular files.

3.1.7 Comparing Directories

Over time, the two directories we considered in the last section will undoubtedly both change. At some future point, you might need to determine the differences between them. dircmp is a special-purpose utility designed to perform this very operation.[4] dircmp takes the directories to be compared as its arguments:

[4] On FreeBSD and Linux systems, diff -r provides the equivalent functionality.

$ dircmp /chem/olddir /chem1/newdir

dircmp produces voluminous output even when the directories you're comparing are small. There are two main sections to the output. The first one lists files that are present in only one of the two directory trees:

Mon Jan 4 1995  /chem/olddir only and /chem1/newdir only  Page 1
./water.dat                    ./hf.dat
./src/viewers/rasmol/init.c    ./h2f.dat
...

All pathnames in the report are relative to the directory locations specified on the command line. In this case, the files in the left column are present only under /chem/olddir, and those in the right column are present only at the new location.

The second part of the report indicates whether the files present in both directory trees are the same or different. Here are some typical lines from this section of the report:

same        ./h2o.dat
different   ./hcl.dat

The default output from dircmp indicates only whether the corresponding files are the same or not, and sometimes this is all you need to know. If you want to know exactly what the differences are, you can include the -d option to dircmp, which tells it to run diff for each pair of differing files (since it uses diff, this works only for text files). On the other hand, if you want to decrease the amount of output by limiting the second section of the report to files that differ, include the -s option on the dircmp command.
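On systems without dircmp, the diff -r approach mentioned in the footnote provides comparable information. Here is a small scratch example with invented file contents:

```shell
# Build two small trees, then summarize their differences.
tmp=$(mktemp -d)
mkdir "$tmp/old" "$tmp/new"
echo same > "$tmp/old/h2o.dat"; echo same > "$tmp/new/h2o.dat"
echo acid > "$tmp/old/hcl.dat"; echo base > "$tmp/new/hcl.dat"
echo only > "$tmp/old/water.dat"
diff -rq "$tmp/old" "$tmp/new" || :   # diff exits nonzero when trees differ
```

The -q option limits the report to one line per difference: it notes that hcl.dat differs between the trees and that water.dat exists only in the old tree, much like the two sections of the dircmp report.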

3.1.8 Deleting Pesky Files

When I teach courses for new Unix users, one of the early exercises consists of figuring out how to delete the files -delete_me and delete me (with the embedded space in the second case).[5] Occasionally, however, a user winds up with a file that he just can't get rid of, no matter how creative he is in using rm. At that point, he will come to you. If there is a way to get rm to do the job, show it to him, but there are some files that rm just can't handle. For example, it is possible for some buggy application program to put a file into a bizarre, inconsistent state. Users can also create such files if they experiment with certain filesystem manipulation tools (which they probably shouldn't be using in the first place).

[5] There are lots of solutions. One of the simplest is rm delete\ me ./-delete_me.

One tool that can take care of such intransigent files is the directory editor feature of the GNU emacs text editor. It is also useful to show this feature to users who just can't get the hang of how to quote strange filenames.

This is the procedure for deleting a file with emacs:

  1. Invoke emacs on the directory in question, either by including its path on the command line or by entering its name at the prompt produced by Ctrl-X Ctrl-F.

  2. Opening the directory causes emacs to automatically enter its directory editing mode. Move the cursor to the file in question using the usual emacs commands.

  3. Enter a d, which is the directory editing mode subcommand to mark a file for deletion. You can also use u to unmark a file, # to mark all auto-save files, and ~ to mark all backup files.

  4. Enter the x subcommand, which says to delete all marked files, and answer the confirmation prompt in the affirmative.

  5. At this point the file will be gone, and you can exit from emacs, continue other editing, or do whatever you need to do next.
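When emacs is not available, another common approach is to remove such a file by its inode number: ls -i reports each file's inode, and find's -inum test (widely supported, though not part of the POSIX standard) matches on it. A sketch, constructing a file with an embedded Ctrl-G for demonstration:

```shell
# Scratch directory containing one file with a Ctrl-G embedded in its name.
cd /tmp && rm -rf inode-demo && mkdir inode-demo && cd inode-demo
touch "bad$(printf '\007')name"

ls -i                               # note the inode number of the odd file
inum=$(ls -i | awk '{print $1; exit}')

# Delete the file by inode; -exec rm is more portable than GNU find's -delete.
find . -inum "$inum" -exec rm {} \;
```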

emacs can also be useful for viewing directory contents when they include files with bizarre characters embedded within them. The most amusing example of this that I can cite is a user who complained to me that the ls command beeped at him every time he ran it. It turned out that this only happened in his home directory, and it was due to a file with a Ctrl-G in the middle of the name. The filename looked fine in ls listings because the Ctrl-G character was being interpreted, causing the beep. Control characters become visible when you look at the directory in emacs, and so the problem was easily diagnosed and remedied (using the directory editing mode's r subcommand, which renames a file).

3.1.9 Putting a Command in a Cage

As we'll discuss in detail later, system security inevitably involves tradeoffs between convenience and risk. One way to mitigate the risks arising from certain inherently dangerous commands and subsystems is to isolate them from the rest of the system. This is accomplished with the chroot command.

The chroot command runs another command from an alternate location within the filesystem, making the command think that the location is actually the root directory of the filesystem. chroot takes one argument, which is the alternate top-level directory. For example, the following command runs the sendmail daemon, using the directory /jail as the new root directory:

# chroot /jail sendmail -bd -q10m

The sendmail process will treat /jail as its root directory. For example, when sendmail looks for the mail aliases database, which it expects to be located in /etc/aliases, it will actually access the file /jail/etc/aliases. In order for sendmail to work properly in this mode, a minimal filesystem needs to be set up under /jail containing all the files and directories that sendmail needs.
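Building that minimal filesystem amounts to recreating, under the jail directory, every path the daemon will open. The skeleton below is a sketch only: the directory layout and file list are illustrative, the exact requirements vary by daemon and system, and the chroot command itself (which must be run as root) is left commented out:

```shell
# Build a minimal directory skeleton for a chrooted daemon.
# A real jail would live somewhere like /jail; /tmp is used here for safety.
JAIL=/tmp/jail-demo
rm -rf "$JAIL"
mkdir -p "$JAIL/etc" "$JAIL/dev" "$JAIL/var/spool/mqueue"

# Copy in the configuration files the daemon will look for under its new
# root; an empty file stands in here for: cp /etc/aliases "$JAIL/etc/"
touch "$JAIL/etc/aliases"

# As root, the daemon would then be started with the jail as its root:
# chroot "$JAIL" sendmail -bd -q10m

ls -R "$JAIL"       # inspect the skeleton
```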

Running a daemon or subsystem as a user created specifically for that purpose (rather than root) is sometimes called sandboxing. This security technique is recommended wherever feasible, and it is often used in conjunction with chrooting for added security. See Section 8.1 for a detailed example of this technique.

FreeBSD also has a facility called jail, which is a stronger version of chroot that allows you to specify access restrictions for the isolated command.

3.1.10 Starting at the End

Perhaps it's appropriate that we consider the tail command near the end of this section on administrative uses of common commands. tail's principal function is to display the last 10 lines of a file (or standard input). tail also has a -f option that displays new lines as they are added to the end of a file; this mode can be useful for monitoring the progress of a command that writes periodic status information to a file. For example, these commands start a background backup with tar, saving its output to a file, and monitor the operation using tail -f:

$ tar -cvf /dev/rmt1 /chem /chem1 > 24oct94_tar.toc &  $ tail -f 24oct94_tar.toc

The information that tar displays about each file as it is written to tape is eventually written to the table of contents file and displayed by tail. The advantage that this method has over the tee command is that the tail command may be killed and restarted as many times as you like without affecting the tar command.
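For comparison, tee copies its standard input both to a file and to the screen, so it can produce the same effect in a single pipeline; the drawback noted above is that interrupting the pipeline also interrupts tar. A sketch, with a simple loop standing in for the long-running tar command:

```shell
# tee writes its input to both the named file and standard output.
# The loop below stands in for tar -cv, which prints one line per file.
for f in file1 file2 file3; do
    echo "a $f"
done | tee /tmp/backup.toc

cat /tmp/backup.toc      # the log file holds the same lines
```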

Some versions of tail also include a -r option, which displays the lines of a file in reverse order; this is occasionally useful. HP-UX does not support this option, and Linux provides the same feature in the tac command.
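A quick demonstration of line reversal (the filename is an example):

```shell
# Reverse the line order of a file. On BSD-derived systems use tail -r;
# GNU systems provide tac instead.
printf 'first\nsecond\nthird\n' > /tmp/lines.txt
tac /tmp/lines.txt           # on BSD: tail -r /tmp/lines.txt
```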

3.1.11 Be Creative

As a final example of the creative use of ordinary commands, consider the following dilemma. A user tells you his workstation won't reboot. He says he was changing his system's boot script but may have deleted some files in /etc accidentally. You go over to it, type ls, and get a message about some missing shared libraries. How do you poke around and find out what files are there?

The answer is to use the simplest Unix command there is, echo, along with the wildcard mechanism, both of which are built into every shell, including the statically linked one available in single user mode.

To see all the files in the current directory, just type:

$ echo *

This command tells the shell to display the value of "*", which of course expands to all files not beginning with a period in the current directory.
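The behavior is easy to verify in a scratch directory (the path below is an example); note that a separate pattern is needed to see the dot files:

```shell
# echo * expands entirely within the shell: no external program is needed.
cd /tmp && rm -rf echo-demo && mkdir echo-demo && cd echo-demo
touch alpha beta .hidden

echo *           # lists alpha beta; the leading-dot file is excluded
echo .*          # matches the dot files (including . and .. in most shells)
```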

By using echo together with cd (also a built-in shell command), I was able to get a pretty good idea of what had happened. I'll tell you the rest of this story at the end of Chapter 4.



Essential System Administration, Third Edition
ISBN: 0596003439
Year: 2002
Pages: 162