Useful Commands

Solaris contains a huge number of commands, all of which are useful in their own right and most of which are more relevant to the system administrator than to the system manager. These commands are well documented in system administration texts, so they are not discussed here. The objective of this section is to show a few commands and combinations that make the system manager's job easier. This particularly applies to the management and manipulation of data files, predominantly those used as logging facilities for different processes or applications. Such files can easily become numerous, if a new log file is created each time the process is run, or large, if logging data is constantly appended to an existing log file.

Combinations of Popular Commands

Commands are combined using the pipe (|) so that the output from one command becomes the input to the next, and so on. The equivalent of an entire shell script can often be written on the command line using this facility. Consider the following example, which can be used to move large numbers of files and directories in one simple action.

To move the entire contents of /data into a new location, called /newdata, the following single command line will copy the whole directory structure:

 cd /data; tar cf - * | (cd /newdata; tar xf -)

The command breaks down as follows:

  1. Move to the source directory (/data).

  2. Execute the tar command to archive all files in the current directory and write this to the standard output (this is normally the screen).

  3. The output is piped into the next command, which is actually two commands because of the parentheses: it first changes to the destination directory (/newdata) and then extracts the archive just created into the current directory. The parentheses ensure that the two commands are handled together.
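The steps above can be exercised safely before being run on real data. This is a minimal sketch using temporary scratch directories in place of /data and /newdata (the file names and contents are illustrative):

```shell
# Scratch directories stand in for /data and /newdata
SRC=`mktemp -d`; DST=`mktemp -d`
mkdir -p "$SRC/sub"
echo "one" > "$SRC/file1"
echo "two" > "$SRC/sub/file2"

# Archive the source tree to standard output and pipe it into a
# subshell that extracts the archive in the destination directory
(cd "$SRC"; tar cf - *) | (cd "$DST"; tar xf -)

# The destination now mirrors the source structure
ls -R "$DST"
```

Running the first cd and tar in a subshell (rather than with a plain `cd /data;` as in the book's one-liner) has the side benefit of leaving the current working directory unchanged.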

The most frequently used combinations involve the commands grep, awk, and sed. These commands have extensive documentation; entire volumes are dedicated to the workings of sed and awk, so they are not discussed here in any detail.

Consider the following command example, though, to see the sort of command that can be easily executed.

The command itself spans two lines and extracts the process ID (pid) of the top process, and then uses it as a parameter to the ptree command, which displays the process tree, including all parent and child processes. This can be very useful when trying to identify (and kill) the calling program of a hung process.

 aries> treepid=`ps -ef | grep top | grep -v grep | awk '{print $2}'`; \
 export treepid; /usr/proc/bin/ptree $treepid
 166   /usr/sbin/inetd -s
   28964  in.telnetd
     28966  -ksh
       882   top
 aries>
find

The find command is arguably one of the most powerful commands available in Solaris. A popular use for it, however, is in maintenance of file systems, where find is used to delete, say, all files matching a certain pattern or residing in a specific file system that are more than a specified number of days old.

For example, find could be used to delete all core files (dump files generated by a process or application failure). This is a popular use because a problem with a particular application might affect 100 different users. Each time the failure is encountered, a core file is created in the user's current directory, potentially creating 100 core files in 100 different directories. The system administrator requires only one example to analyze or pass to the developers. Because these files can be quite sizeable (several megabytes), they can soon consume considerable amounts of disk space. The command to delete all core files more than seven days old throughout the system is shown here, although it would normally be run as a cron job on a daily basis (probably overnight).

As the superuser (root):

 # find / -name core -mtime +7 -exec rm -f {} \;
 #

Notice that there is no output as a result; in this case, find just goes away and carries out the operation. The only sign that it has finished is the return of the shell prompt.
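The behavior of the command is easy to confirm against a scratch directory first. This sketch fakes the age of two core files with touch -t (the directory layout and the 2020 timestamp are invented for the demonstration); only files older than seven days are removed:

```shell
DIR=`mktemp -d`
mkdir -p "$DIR/app1" "$DIR/app2" "$DIR/app3"

# Two stale core files: force their timestamps back to January 2020
touch -t 202001010000 "$DIR/app1/core" "$DIR/app2/core"

# One recent core file, which -mtime +7 should leave alone
touch "$DIR/app3/core"

# Same form as the system-wide command, scoped to the scratch tree
find "$DIR" -name core -mtime +7 -exec rm -f {} \;

ls -R "$DIR"
```

The recent core file in app3 survives, demonstrating that a fresh dump is still available for analysis when the cron job runs.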

The system manager can use the find command for other purposes, such as running a periodic check for files belonging to user root that have world write permission. This combination presents a security risk and should be discouraged, so if find locates any matching files, it can automatically remove the world-write bit to make the file more secure and then log its name. The command to do this and save the output (and any error messages) into a log file is shown here:

 # find / -type f -user root -perm -o+w -print -exec chmod o-w {} \; \
     >>/var/adm/rootsec.log 2>&1
 #
fold

The fold command imposes a line length on a file, specified by the user, and, rather than truncating, folds the data onto subsequent lines. This command is particularly useful when a data file to be printed has a greater line length than the printer. For example, the system manager might need to print a data file that has 155 columns per line. The printer outputs onto A4 paper in portrait format and therefore can print only 80 columns per line. In this instance, the printer would normally truncate each line to 80 characters and discard the remaining columns. Using fold, however, retains the extra 75 characters of data and "folds" them onto the next line, thereby printing the entire file and also keeping the data in a contiguous format.
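The arithmetic is easy to verify on a generated line. This sketch builds a single 155-character line and folds it at 80 columns, yielding one line of 80 characters and one of 75 (the temporary file names are illustrative):

```shell
# Build one 155-character line (155 "x" characters plus a newline)
WIDE=`mktemp`
printf '%155s\n' '' | tr ' ' 'x' > "$WIDE"

# Fold at 80 columns, as for an 80-column printer
FOLDED=`mktemp`
fold -w 80 "$WIDE" > "$FOLDED"

# The single wide line has become two printable lines
wc -l "$FOLDED"
```

No data is lost: the two folded lines still total 155 characters.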

As shown in Figure 9.3, the window displays only the portion of the line that is currently visible. The scrollbar at the bottom of the window would have to be used to view the remainder of the line. If this document were printed on an 80-column printer, then this is all that would appear; any extra columns would be lost.

Figure 9.3. Here, a data file (tedit1) is being viewed using the text editor application, available with the CDE windowing environment.

graphics\09fig03.gif

The fold command inserts a newline character after the specified number of columns have been reached, creating an extra line containing the remainder of the data. Figure 9.4 shows the same data file after running this fold command and saving it to a new file (tedit2):

Figure 9.4. The text editor window still displays the visible portion of the line, but there is now considerably more data. If this document were printed on an 80-column printer, the entire contents of the file would be printed.

graphics\09fig04.gif

 # fold -w 80 tedit1 >>tedit2
split

The split command chops a file into a specified number of smaller files, according to either size or the number of lines in the file. Sometimes a file will be too big to be manually edited using, say, the vi editor, so the split command can be used to produce a number of smaller files that are easier to view.

Another, probably more useful, way of using split is for recycling cumulative log files. Suppose that a log file is appended to on a regular basis and becomes large. Although it might be tempting to just delete the file every now and then and start again with a new log file, it might be a better solution to retain, say, the latest (most recent) 10% of the file. In this way, the latest log information is not discarded until the next time the file is split. By doing this, the file size (and number of lines) is reduced by 90%, reclaiming disk space and retaining potentially valuable information.

As an example, suppose that the file logfile is used as a cumulative log for a specific application process. Over a period of a week, its size grows to just under 500,000 lines. An automatic cron job could be initiated to remove 90% of the file, keeping the latest 10% for another week so that the information is available if it is required.

The following command will create 10 files, each containing 10% of the cumulative log file:

 split -l `expr \`wc -l logfile | awk '{print $1}'\` / 10` logfile log_

This command basically takes a line count of the file logfile, divides it by 10, and uses the answer as the argument to the split command. It creates the 10 files log_aa to log_aj. Listing 9.9 shows the line count for the file logfile, followed by the line counts for the 10 derived files.

Listing 9.9 Sample Output Demonstrating That a File Can Be Split Equally
 $ wc -l logfile
   494280 logfile
 $ wc -l log_*
    49428 log_aa
    49428 log_ab
    49428 log_ac
    49428 log_ad
    49428 log_ae
    49428 log_af
    49428 log_ag
    49428 log_ah
    49428 log_ai
    49428 log_aj
   494280 total

The only thing left to do is to rename the file log_aj to logfile, replacing the original large file, and then delete log_a*, that is, the remaining nine files that were created as a result of running the split command.
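The whole recycling step, including the rename and cleanup just described, can be scripted end to end. This sketch follows the book's naming (logfile, the log_ prefix, the 10% retention), but uses a small 500-line sample log so the result is easy to inspect:

```shell
LOGDIR=`mktemp -d`; cd "$LOGDIR"

# Build a sample 500-line cumulative log
i=1
while [ $i -le 500 ]; do echo "entry $i"; i=`expr $i + 1`; done > logfile

# Split into 10 equal pieces: log_aa through log_aj
split -l `expr \`wc -l logfile | awk '{print $1}'\` / 10` logfile log_

# Keep only the newest 10% as the new logfile; discard the rest.
# After the rename, log_a* matches only the nine older pieces.
mv log_aj logfile
rm -f log_a*

wc -l logfile
```

The new logfile holds the 50 most recent entries, and the nine older pieces are gone, reclaiming 90% of the space.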

unix2dos and dos2unix

These two commands are used when transferring files between the Solaris environment and the PC environment. The dos2unix command converts a file from DOS format to ISO ASCII format so that it can be read correctly in the Solaris environment. The unix2dos command performs the conversion the other way so that a file can be read correctly in the DOS environment. The only limitation with these commands is that the filename(s) must conform to the environment in which the command is run; that is, 8.3 format if run from the DOS environment.
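The difference between the two formats is simply the carriage-return character before each newline. This sketch builds a DOS-format file and strips the carriage returns with tr as a portable stand-in for dos2unix where that command is not installed (the file contents are invented; on Solaris, `dos2unix file file` achieves the same result):

```shell
F=`mktemp`

# Create a two-line file with DOS (CRLF) line endings
printf 'HOST = aries\r\nPORT = 1521\r\n' > "$F"

# tr -d '\r' removes every carriage return, leaving UNIX (LF) endings;
# on Solaris, "dos2unix $F $F" would perform the same conversion
tr -d '\r' < "$F" > "$F.unix"

# Inspect the result; no ^M (\r) characters remain
od -c "$F.unix"
```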

As an example, suppose that the Oracle listener configuration file tnsnames.ora is to be copied to the Solaris environment from a PC rather than creating a new one. Figure 9.5 shows the file being edited with the vi editor after it has simply been transferred using FTP without any conversion.

Figure 9.5. The vi editor window displays the extra return characters that the DOS format includes, depicted by ^M at the end of each line.

graphics\09fig05.gif

To process this file correctly, the following dos2unix command was run:

 # dos2unix tnsnames.ora tnsnames.ora
 #

In this case, I chose to overwrite the file with the properly formatted version, although any output filename can be chosen. Figure 9.6 shows the file being edited with the vi editor after the conversion has taken place.

Figure 9.6. The extra characters have been stripped away, and the file will now process correctly.

graphics\09fig06.gif

head and tail

The head and tail commands display either the top or the bottom of a file, respectively. For example, the head command is useful when a data file contains a header on the first line detailing maybe a total record count. In this case, the following command will select only the first line:

 # head -1 datafile
 00987764
 #

In this case, the total record count value can be used as a validation check to ensure that the entire file was read and that the number of records matched the expected total.
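That validation check can be scripted directly. This sketch assumes a file whose first line holds the expected record count for the data lines that follow (the file contents and the zero-padded header are invented for the demonstration):

```shell
DF=`mktemp`

# First line is the record count, zero-padded as in the example output
printf '00000003\nrec1\nrec2\nrec3\n' > "$DF"

# Expected count from the header line
expected=`head -1 "$DF"`

# Actual records = total lines minus the header line
actual=`expr \`wc -l < "$DF"\` - 1`

if [ "$expected" -eq "$actual" ]; then
    echo "record count OK"
else
    echo "record count MISMATCH" >&2
fi
```

Note that test's -eq comparison is numeric, so the leading zeros in the header do not upset the check.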

By default, head reads the first 10 lines of the specified file if no value is entered.

The tail command works in a similar way to the head command, except that it is used to examine the lines at the end of a file. By default, tail shows the last 10 lines if no numeric value is entered. The tail command is often used to look at the last line of a log file, for example, to see if the processing has completed so that the next task can be initiated. This is frequently used when data files are received on an ad-hoc basis rather than at scheduled intervals.
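Checking the last line for a completion marker is a one-liner with tail -1. In this sketch, the marker text PROCESSING COMPLETE and the log contents are invented examples:

```shell
LOG=`mktemp`
printf 'started\nloaded 100 records\nPROCESSING COMPLETE\n' > "$LOG"

# Look at the last line only; proceed when the marker appears
if tail -1 "$LOG" | grep "PROCESSING COMPLETE" > /dev/null; then
    echo "log complete - next task can be initiated"
fi
```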

An extremely useful feature of tail is the -f flag, which shows any lines that are subsequently added to the end of a file. This is particularly useful when monitoring a process that is writing to a log file because all entries will be displayed on the screen as they are written to the log file.

od (Octal Dump)

The od command has value for the system manager (and administrators) because it can be used to display nonprintable characters. This is especially useful when filenames contain such characters and cannot be easily deleted. Consider the following example, in which the filename consists only of five <tab> characters. The od command with the -c switch displays the characters clearly so that the file can now be deleted.

Listing 9.10 shows the five tabs file being identified and subsequently deleted.

Listing 9.10 An Annotated Screen Session Using the od Command
 # ls
                                           <<< Here's the file, but it contains
                                               nonprintable characters
 # ls | od -c
 0000000  \t  \t  \t  \t  \t  \n           <<< od has identified the characters
 0000006
 # rm \         \        \        \        \
                                           <<< The file can now be explicitly deleted
 # ls
 #                                         <<< All gone

Filenames such as those listed are normally the result of a typing error and cause minor inconvenience, but a file of this kind might have been left by an unauthorized guest; it is not obviously detected and would not be automatically deleted by routine housekeeping processes.

Some companies have a policy to alias the rm command to include the -i flag so that a prompt is always issued before a file is deleted. This method can also be used to delete files containing nonprintable characters.

A further way of removing unwanted or invalid characters is to use the tr command to translate these characters into something that is readable. This command is popularly used to force user responses into upper case, for example.
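Forcing a response into upper case with tr is a one-liner. The variable name answer and its value are illustrative (in a real script the value would come from read):

```shell
# read would normally supply this; a fixed value keeps the sketch self-contained
answer="yes"

# Translate lower-case letters to their upper-case equivalents
answer=`echo "$answer" | tr '[a-z]' '[A-Z]'`

echo "$answer"
```

This makes subsequent comparisons case-insensitive: the script need only test against "YES" regardless of how the user typed the response.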


Solaris System Management (New Riders Professional Library)
John Philcox
ISBN: 073571018X
Year: 2001