Shell Basics


This section covers writing and using startup files, redirecting standard error, writing and executing simple shell scripts, separating and grouping commands, implementing job control, and manipulating the directory stack.

Startup Files

When a shell starts, it runs startup files to initialize itself. Which files the shell runs depends on whether it is a login shell, an interactive shell that is not a login shell (such as you get by giving the command bash), or a noninteractive shell (one used to execute a shell script). You must have read access to a startup file to execute the commands in it. Typically Linux distributions put appropriate commands in some of these files. This section covers bash startup files. See page 342 for information on tcsh startup files.

Login Shells

The files covered in this section are executed by login shells and shells that you start with the --login option. Login shells are, by their nature, interactive.

/etc/profile

The shell first executes the commands in /etc/profile. Superuser can set up this file to establish systemwide default characteristics for bash users.

.bash_profile .bash_login .profile

Next the shell looks for ~/.bash_profile, ~/.bash_login, and ~/.profile (~/ is shorthand for your home directory), in that order, executing the commands in the first of these files it finds. You can put commands in one of these files to override the defaults set in /etc/profile.

.bash_logout

When you log out, bash executes commands in the ~/.bash_logout file. Frequently commands that clean up after a session, such as those that remove temporary files, go in this file.

Interactive Nonlogin Shells

The commands in the preceding startup files are not executed by interactive, nonlogin shells. However, these shells inherit from the login shell variables that are set by these startup files.

/etc/bashrc

Although not called by bash directly, many ~/.bashrc files call /etc/bashrc. This setup allows Superuser to establish systemwide default characteristics for nonlogin bash shells.

.bashrc

An interactive nonlogin shell executes commands in the ~/.bashrc file. Typically a startup file for a login shell, such as .bash_profile, runs this file, so that both login and nonlogin shells benefit from the commands in .bashrc.

Noninteractive Shells

The commands in the previously described startup files are not executed by noninteractive shells, such as those that run shell scripts. However, these shells inherit from the login shell variables that are set by these startup files.

BASH_ENV

Noninteractive shells look for the environment variable BASH_ENV (or ENV, if the shell is called as sh) and execute commands in the file named by this variable.
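For example, a login startup file can export BASH_ENV so that noninteractive shells read a personal initialization file. The following is a minimal sketch; the filename ~/.bash_env is arbitrary, and any readable file of commands will do:

$ cat ~/.bash_profile
export BASH_ENV=$HOME/.bash_env     # noninteractive bash shells execute the commands in this file
$ cat ~/.bash_env
set -o noclobber                    # options you want shell scripts to start with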

Setting Up Startup Files

Although many startup files and types of shells exist, usually all you need are the .bash_profile and .bashrc files in your home directory. Commands similar to the following in .bash_profile run commands from .bashrc for login shells (when .bashrc exists). With this setup, the commands in .bashrc are executed by login and nonlogin shells.

 if [ -f ~/.bashrc ]; then source ~/.bashrc; fi 

The [ -f ~/.bashrc ] tests whether the file named .bashrc in your home directory exists. See page 794 for more information on test and its synonym [ ].

tip: Use .bash_profile to set PATH

Because commands in .bashrc may be executed many times, and because subshells inherit exported variables, it is a good idea to put commands that add to existing variables in the .bash_profile file. For example, the following command adds the bin subdirectory of the home directory to PATH (page 284) and should go in .bash_profile:

 PATH=$PATH:$HOME/bin 

When you put this command in .bash_profile and not in .bashrc, the string is added to the PATH variable only once, when you log in.

Modifying a variable in .bash_profile allows changes you make in an interactive session to propagate to subshells. In contrast, modifying a variable in .bashrc overrides changes inherited from a parent shell.
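To see why, consider a contrived sketch of what happens when the PATH assignment is placed in .bashrc instead. Because PATH is exported, each new interactive shell inherits the already extended value, reads .bashrc, and appends another copy of the directory:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/home/alex/bin
$ bash                               # the subshell reads .bashrc and appends again
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/home/alex/bin:/home/alex/bin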


Sample .bash_profile and .bashrc files follow. Some of the commands used in these files are not covered until later in this chapter. In any startup file, you must export variables and functions that you want to be available to child processes. For more information refer to "Locality of Variables" on page 475.

$ cat ~/.bash_profile
if [ -f ~/.bashrc ]; then
    source ~/.bashrc               # read local startup file if it exists
fi
PATH=$PATH:.                       # add the working directory to PATH
export PS1='[\h \W \!]\$ '         # set prompt

The first command in the preceding .bash_profile file executes the commands in the user's .bashrc file if it exists. The next command adds to the PATH variable (page 284). Typically PATH is set and exported in /etc/profile so it does not need to be exported in a user's startup file. The final command sets and exports PS1 (page 286), which controls the user's prompt.

Next is a sample .bashrc file. The first command executes the commands in the /etc/bashrc file if it exists. Next the LANG (page 290) and VIMINIT (page 176) variables are set and exported and several aliases (page 312) are established. The final command defines a function (page 315) that swaps the names of two files.

$ cat ~/.bashrc
if [ -f /etc/bashrc ]; then
    source /etc/bashrc          # read global startup file if it exists
fi
set -o noclobber               # prevent overwriting files
unset MAILCHECK                # turn off "you have new mail" notice
export LANG=C                  # set LANG variable
export VIMINIT='set ai aw'     # set vim options
alias df='df -h'               # set up aliases
alias rm='rm -i'               # always do interactive rm's
alias lt='ls -ltrh | tail'
alias h='history | tail'
alias ch='chmod 755 '
function switch()              # a function to exchange the names
{                              # of two files
    local tmp=$$switch
    mv "$1" $tmp
    mv "$2" "$1"
    mv $tmp "$2"
}

. (Dot) or source: Runs a Startup File in the Current Shell

After you edit a startup file such as .bashrc, you do not have to log out and log in again to put the changes into effect. You can run the startup file using the . (dot) or source builtin (they are the same command under bash; only source is available under tcsh [page 380]). As with all other commands, the . must be followed by a SPACE on the command line. Using the . or source builtin is similar to running a shell script, except that these commands run the script as part of the current process. Consequently, when you use . or source to run a script, changes you make to variables from within the script affect the shell that you run the script from. You can use the . or source command to run any shell script, not just a startup file, but undesirable side effects (such as changes in the values of shell variables you rely on) may occur. If you ran a startup file as a regular shell script and did not use the . or source builtin, the variables created in the startup file would remain in effect only in the subshell running the script, not in the shell you ran the script from. For more information refer to "Locality of Variables" on page 475.

In the following example, .bashrc sets several variables and sets PS1, the prompt, to the name of the host. The . builtin puts the new values into effect.

$ cat ~/.bashrc
export TERM=vt100             # set the terminal type
export PS1="$(hostname -f): " # set the prompt string
export CDPATH=:$HOME          # add HOME to CDPATH string
stty kill '^u'                # set kill line to control-u
$ . ~/.bashrc
bravo.example.com:

Commands That Are Symbols

The Bourne Again Shell uses the symbols (, ), [, ], and $ in a variety of ways. To minimize confusion, Table 8-1 lists the most common use of each of these symbols, even though some of them are not introduced until later.

Table 8-1. Builtin commands that are symbols

Symbol     Command

( )        Subshell (page 270)

$( )       Command substitution (page 329)

(( ))      Arithmetic evaluation; a synonym for let (use when the enclosed value contains an equal sign) (page 501)

$(( ))     Arithmetic expansion (not for use with an enclosed equal sign) (page 327)

[ ]        The test command (pages 437, 440, 453, and 794)

[[ ]]      Conditional expression; similar to [ ] but adds string comparisons (page 503)
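The following brief session is a sketch of several of these symbol commands in use; the output shown will vary with your system and working directory:

$ (cd /tmp; pwd)                     # subshell: cd does not affect the parent shell
/tmp
$ echo "Today is $(date +%A)."       # command substitution
Today is Friday.
$ ((count = 2 + 3)); echo $count     # arithmetic evaluation
5
$ echo $((count * 10))               # arithmetic expansion
50
$ [[ $PWD == /tmp* ]] || echo "not under /tmp"   # conditional expression
not under /tmp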


Redirecting Standard Error

Chapter 5 covered the concept of standard output and explained how to redirect standard output of a command. In addition to standard output, commands can send output to standard error. A command can send error messages to standard error to keep them from getting mixed up with the information it sends to standard output.

Just as it does with standard output, by default the shell sends a command's standard error to the screen. Unless you redirect one or the other, you may not know the difference between the output a command sends to standard output and the output it sends to standard error. This section covers the syntax used by the Bourne Again Shell. See page 349 if you are using the TC Shell.

File descriptors

A file descriptor is the place a program sends its output to and gets its input from. When you execute a program, the process running the program opens three file descriptors: 0 (standard input), 1 (standard output), and 2 (standard error). The redirect output symbol (> [page 116]) is shorthand for 1>, which tells the shell to redirect standard output. Similarly < (page 118) is short for 0<, which redirects standard input. The symbol 2> redirects standard error. For more information refer to "File Descriptors" on page 470.

The following examples demonstrate how to redirect standard output and standard error to different files and to the same file. When you run the cat utility with the name of a file that does not exist and the name of a file that does exist, cat sends an error message to standard error and copies the file that does exist to standard output. Unless you redirect them, both messages appear on the screen.

$ cat y
This is y.
$ cat x
cat: x: No such file or directory
$ cat x y
cat: x: No such file or directory
This is y.

When you redirect standard output of a command, output sent to standard error is not affected and still appears on the screen.

$ cat x y > hold
cat: x: No such file or directory
$ cat hold
This is y.

Similarly, when you send standard output through a pipe, standard error is not affected. The following example sends standard output of cat through a pipe to tr (page 804), which in this example converts lowercase characters to uppercase. The text that cat sends to standard error is not translated because it goes directly to the screen rather than through the pipe.

$ cat x y | tr "[a-z]" "[A-Z]"
cat: x: No such file or directory
THIS IS Y.

The following example redirects standard output and standard error to different files. The notation 2> tells the shell where to redirect standard error (file descriptor 2). The 1> tells the shell where to redirect standard output (file descriptor 1). You can use > in place of 1>.

$ cat x y 1> hold1 2> hold2
$ cat hold1
This is y.
$ cat hold2
cat: x: No such file or directory

Duplicating a file descriptor

In the next example, 1> redirects standard output to hold. Then 2>&1 declares file descriptor 2 to be a duplicate of file descriptor 1. As a result both standard output and standard error are redirected to hold.

$ cat x y 1> hold 2>&1
$ cat hold
cat: x: No such file or directory
This is y.

In the preceding example, 1> hold precedes 2>&1. If they had been listed in the opposite order, standard error would have been made a duplicate of standard output before standard output was redirected to hold. In that case only standard output would have been redirected to hold.

The next example declares file descriptor 2 to be a duplicate of file descriptor 1 and sends the output for file descriptor 1 through a pipe to the tr command.

$ cat x y 2>&1 | tr "[a-z]" "[A-Z]"
CAT: X: NO SUCH FILE OR DIRECTORY
THIS IS Y.

Sending errors to standard error

You can also use 1>&2 to redirect standard output of a command to standard error. This technique is often used in shell scripts to send the output of echo to standard error. In the following script, standard output of the first echo is redirected to standard error:

$ cat message_demo
echo This is an error message. 1>&2
echo This is not an error message.

If you redirect standard output of message_demo, error messages such as the one produced by the first echo will still go to the screen because you have not redirected standard error. Because standard output of a shell script is frequently redirected to another file, you can use this technique to display on the screen error messages generated by the script. The lnks script (page 445) uses this technique. You can also use the exec builtin to create additional file descriptors and to redirect standard input, standard output, and standard error of a shell script from within the script (page 491).
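For example, running message_demo with its standard output redirected (a sketch of a typical session) leaves the error message on the screen while the other line goes to the file:

$ bash message_demo > out
This is an error message.
$ cat out
This is not an error message.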

The Bourne Again Shell supports the redirection operators shown in Table 8-2.

Table 8-2. Redirection operators

Operator         Meaning

< filename       Redirects standard input from filename.

> filename       Redirects standard output to filename unless filename exists and noclobber (page 119) is set. If noclobber is not set, this redirection creates filename if it does not exist.

>| filename      Redirects standard output to filename, even if the file exists and noclobber (page 119) is set.

>> filename      Redirects and appends standard output to filename unless filename exists and noclobber (page 119) is set. If noclobber is not set, this redirection creates filename if it does not exist.

<&m              Duplicates standard input from file descriptor m (page 471).

[n]>&m           Duplicates standard output or file descriptor n, if specified, from file descriptor m (page 471).

[n]<&-           Closes standard input or file descriptor n if specified (page 471).

[n]>&-           Closes standard output or file descriptor n if specified.
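The following short session sketches how noclobber interacts with the >, >|, and >> operators; the filename tmpfile is arbitrary:

$ set -o noclobber
$ echo first > tmpfile
$ echo second > tmpfile
bash: tmpfile: cannot overwrite existing file
$ echo second >| tmpfile             # >| overrides noclobber
$ echo third >> tmpfile              # >> appends
$ cat tmpfile
second
third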


Writing a Simple Shell Script

A shell script is a file that contains commands that the shell can execute. The commands in a shell script can be any commands you can enter in response to a shell prompt. For example, a command in a shell script might run a Linux utility, a compiled program, or another shell script. Like the commands you give on the command line, a command in a shell script can use ambiguous file references and can have its input or output redirected from or to a file or sent through a pipe (page 122). You can also use pipes and redirection with the input and output of the script itself.

In addition to the commands you would ordinarily use on the command line, control flow commands (also called control structures) find most of their use in shell scripts. This group of commands enables you to alter the order of execution of commands in a script just as you would alter the order of execution of statements using a structured programming language. Refer to "Control Structures" on page 436 (bash) and page 368 (tcsh) for specifics.

The shell interprets and executes the commands in a shell script, one after another. Thus a shell script enables you to simply and quickly initiate a complex series of tasks or a repetitive procedure.

chmod: Makes a File Executable

To execute a shell script by giving its name as a command, you must have permission to read and execute the file that contains the script (refer to "Access Permissions" on page 91). Read permission enables you to read the file that holds the script. Execute permission tells the shell and the system that the owner, group, and/or public has permission to execute the file; it implies that the content of the file is executable.

When you create a shell script using an editor, the file does not typically have its execute permission set. The following example shows a file named whoson that contains a shell script:

$ cat whoson
date
echo "Users Currently Logged In"
who
$ whoson
bash: ./whoson: Permission denied

You cannot execute whoson by giving its name as a command because you do not have execute permission for the file. The shell does not recognize whoson as an executable file and issues an error message when you try to execute it. When you give the filename as an argument to bash (bash whoson), bash takes the argument to be a shell script and executes it. In this case bash is executable and whoson is an argument that bash executes so you do not need to have permission to execute whoson. You can do the same with tcsh script files.

tip: Command not found?

If you get the message

$ whoson
bash: whoson: command not found

the shell is not set up to search for executable files in the working directory. Give this command instead:

 $ ./whoson 

The ./ tells the shell explicitly to look for an executable file in the working directory. To change the environment so that the shell searches the working directory automatically, see page 284.


The chmod utility changes the access privileges associated with a file. Figure 8-1 shows ls with the -l option displaying the access privileges of whoson before and after chmod gives execute permission to the file's owner.

Figure 8-1. Using chmod to make a shell script executable


The first ls displays a hyphen (-) as the fourth character, indicating that the owner does not have permission to execute the file. Next chmod gives the owner execute permission: The u+x causes chmod to add (+) execute permission (x) for the owner (u). (The u stands for user, although it means the owner of the file, who may be the user of the file at any given time.) The second argument is the name of the file. The second ls shows an x in the fourth position, indicating that the owner now has execute permission.
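A session along the lines of Figure 8-1 would look something like the following; the owner, group, size, and date shown are illustrative only:

$ ls -l whoson
-rw-rw-r--   1 alex alex 40 May 24 11:30 whoson
$ chmod u+x whoson
$ ls -l whoson
-rwxrw-r--   1 alex alex 40 May 24 11:30 whoson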

If other users will execute the file, you must also change group and/or public access permissions for the file. Any user must have execute access to use the file's name as a command. If the file is a shell script, the user trying to execute the file must also have read access to the file. You do not need read access to execute a binary executable (compiled program).

The final command in Figure 8-1 shows the shell executing the file when its name is given as a command. For more information refer to "Access Permissions" on page 91 and to ls and chmod in Part V.

#! Specifies a Shell

You can put a special sequence of characters on the first line of a file to tell the operating system which shell should execute the file. Because the operating system checks the initial characters of a program before attempting to exec it, these characters save the system from making an unsuccessful attempt. If #! are the first two characters of a script, the system interprets the characters that follow as the absolute pathname of the utility that should execute the script. This can be the pathname of any program, not just a shell. The following example specifies that bash should run the script:

$ cat bash_script
#!/bin/bash
echo "This is a Bourne Again Shell script."

The #! characters are useful if you have a script that you want to run with a shell other than the shell you are running the script from. The following example shows a script that should be executed by tcsh:

$ cat tcsh_script
#!/bin/tcsh
echo "This is a tcsh script."
set person = jenny
echo "person is $person"

Because of the #! line, the operating system ensures that tcsh executes the script no matter which shell you run it from.

You can use ps -f within a shell script to display the name of the shell that is executing the script. The three lines that ps displays in the following example show the process running the parent bash shell, the process running the tcsh script, and the process running the ps command:

$ cat tcsh_script2
#!/bin/tcsh
ps -f
$ tcsh_script2
UID        PID  PPID  C STIME TTY          TIME CMD
alex      3031  3030  0 Nov16 pts/4    00:00:00 -bash
alex      9358  3031  0 21:13 pts/4    00:00:00 /bin/tcsh ./tcsh_script2
alex      9375  9358  0 21:13 pts/4    00:00:00 ps -f

If you do not follow #! with the name of an executable program, the shell reports that it cannot find the command that you asked it to run. You can optionally follow #! with SPACEs. If you omit the #! line and try to run, for example, a tcsh script from bash, the shell may generate error messages or the script may not run properly.

See page 576 for an example of a stand-alone sed script that uses #!.

# Begins a Comment

Comments make shell scripts and all code easier to read and maintain by you and others. The comment syntax is common to both the Bourne Again and the TC Shells.

If a pound sign (#) in the first character position of the first line of a script is not immediately followed by an exclamation point (!) or if a pound sign occurs in any other location in a script, the shell interprets it as the beginning of a comment. The shell then ignores everything between the pound sign and the end of the line (the next NEWLINE character).
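For example, in the following sketch of a script (assume it has been made executable as described earlier), both forms of comment are ignored by the shell:

$ cat comment_demo
#!/bin/bash
# This entire line is a comment.
echo "Hello."          # this comment starts at the pound sign
$ ./comment_demo
Hello.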

Running a Shell Script
fork and exec system calls

A command on the command line causes the shell to fork a new process, creating a duplicate of the shell process (a subshell). The new process attempts to exec (execute) the command. Like fork, the exec routine is executed by the operating system (a system call). If the command is a binary executable program, such as a compiled C program, exec succeeds and the system overlays the newly created subshell with the executable program. If the command is a shell script, exec fails. When exec fails, the command is assumed to be a shell script, and the subshell runs the commands in the script. Unlike a login shell, which expects input from the command line, the subshell takes its input from a file: the shell script.

As discussed earlier, if you have a shell script in a file that you do not have execute permission for, you can run the commands in the script by using a bash command to exec a shell to run the script directly. In the following example, bash creates a new shell that takes its input from the file named whoson:

 $ bash whoson 

Because the bash command expects to read a file containing commands, you do not need execute permission for whoson. (You do need read permission.) Even though bash reads and executes the commands in whoson, standard input, standard output, and standard error remain connected to the terminal.

Although you can use bash to execute a shell script, this technique causes the script to run more slowly than giving yourself execute permission and directly invoking the script. Users typically prefer to make the file executable and run the script by typing its name on the command line. It is also easier to type the name, and this practice is consistent with the way other kinds of programs are invoked (so you do not need to know whether you are running a shell script or another kind of program). However, if bash is not your interactive shell or if you want to see how the script runs with different shells, you may want to run a script as an argument to bash or tcsh.

caution: sh does not call the original Bourne Shell

The original Bourne Shell was invoked with the command sh. Although you can call bash with an sh command, it is not the original Bourne Shell. The sh command (/bin/sh) is a symbolic link to /bin/bash, so it is simply another name for the bash command. When you call bash using the command sh, bash tries to mimic the behavior of the original Bourne Shell as closely as possible. It does not always succeed.


Separating and Grouping Commands

Whether you give the shell commands interactively or write a shell script, you must separate commands from one another. This section, which applies to the Bourne Again and the TC Shells, reviews the ways to separate commands that were covered in Chapter 5 and introduces a few new ones.

; AND NEWLINE Separate Commands

The NEWLINE character is a unique command separator because it initiates execution of the command preceding it. You have seen this throughout this book each time you press the RETURN key at the end of a command line.

The semicolon ( ; ) is a command separator that does not initiate execution of a command and does not change any aspect of how the command functions. You can execute a series of commands sequentially by entering them on a single command line and separating each from the next with a semicolon (;). You initiate execution of the sequence of commands by pressing RETURN:

 $ x ; y ; z 

If x, y, and z are commands, the preceding command line yields the same results as the next three commands. The difference is that in the next example the shell issues a prompt after each of the commands (x, y, and z) finishes executing, whereas the preceding command line causes the shell to issue a prompt only after z is complete:

$ x
$ y
$ z

Whitespace

Although the whitespace around the semicolons in the earlier example makes the command line easier to read, it is not necessary. None of the command separators needs to be surrounded by SPACEs or TABs.

\ Continues a Command

When you enter a long command line and the cursor reaches the right side of the screen, you can use a backslash (\) character to continue the command on the next line. The backslash quotes, or escapes, the NEWLINE character that follows it so that the shell does not treat the NEWLINE as a command terminator. Enclosing a backslash within single quotation marks turns off the power of a backslash to quote special characters such as NEWLINE. Enclosing a backslash within double quotation marks has no effect on the power of the backslash.

Although you can break a line in the middle of a word (token), it is typically easier to break a line just before or after whitespace.

optional

You can enter a RETURN in the middle of a quoted string on a command line without using a backslash. The NEWLINE (RETURN) that you enter will then be part of the string:

 $ echo "Please enter the three values > required to complete the transaction." Please enter the three values required to complete the transaction. 

In the three examples in this section, the shell does not interpret RETURN as a command terminator because it occurs within a quoted string. The > is a secondary prompt indicating that the shell is waiting for you to continue the unfinished command. In the next example, the first RETURN is quoted (escaped) by a backslash, so the shell removes the backslash-NEWLINE pair and the RETURN does not become part of the string.

 $ echo "Please enter the three values \ > required to complete the transaction." Please enter the three values required to complete the transaction. 

Single quotation marks cause the shell to interpret a backslash literally:

$ echo 'Please enter the three values \
> required to complete the transaction.'
Please enter the three values \
required to complete the transaction.


| AND & Separate Commands and Do Something Else

The pipe symbol ( | ) and the background task symbol (&) are also command separators. They do not start execution of a command but do change some aspect of how the command functions. The pipe symbol alters the source of standard input or the destination of standard output. The background task symbol causes the shell to execute the task in the background so you get a prompt immediately and can continue working on other tasks.

Each of the following command lines initiates a single job comprising three tasks:

$ x | y | z
$ ls -l | grep tmp | less

In the first job, the shell redirects standard output of task x to standard input of task y and redirects y's standard output to z's standard input. Because it runs the entire job in the foreground, the shell does not display a prompt until task z runs to completion: Task z does not finish until task y finishes, and task y does not finish until task x finishes. In the second job, task x is an ls -l command, task y is grep tmp, and task z is the pager less. The shell displays a long (wide) listing of the files in the working directory that contain the string tmp, piped through less.

The next command line executes tasks d and e in the background and task f in the foreground:

$ d & e & f
[1] 14271
[2] 14272

The shell displays the job number between brackets and the PID (process identification) number for each process running in the background. You get a prompt as soon as f finishes, which may be before d or e finishes.

Before displaying a prompt for a new command, the shell checks whether any background jobs have completed. For each job that has completed, the shell displays its job number, the word Done, and the command line that invoked the job; then the shell displays a prompt. When the job numbers are listed, the number of the last job started is followed by a + character and the job number of the previous job is followed by a - character. Any other jobs listed show a SPACE character. After running the last command, the shell displays the following before issuing a prompt:

[1]-  Done                    d
[2]+  Done                    e

The next command line executes all three tasks as background jobs. You get a shell prompt immediately:

$ d & e & f &
[1] 14290
[2] 14291
[3] 14292

You can use pipes to send the output from one task to the next task and an ampersand (&) to run the entire job as a background task. Again the prompt comes back immediately. The shell regards the commands joined by a pipe as being a single job. That is, it treats all pipes as single jobs, no matter how many tasks are connected with the pipe (|) symbol or how complex they are. The Bourne Again Shell shows only one process placed in the background:

$ d | e | f &
[1] 14295

The TC Shell shows three processes (all belonging to job 1) placed in the background:

tcsh $ d | e | f &
[1] 14302 14304 14306

optional: ( ) Groups Commands

You can use parentheses to group commands. The shell creates a copy of itself, called a subshell, for each group. It treats each group of commands as a job and creates a new process to execute each command (refer to "Process Structure" on page 293 for more information on creating subshells). Each subshell (job) has its own environment, meaning that it has its own set of variables with values that can differ from those of other subshells.

The following command line executes commands a and b sequentially in the background while executing c in the background. The shell prompt returns immediately.

$ (a ; b) & c &
[1] 15520
[2] 15521

The preceding example differs from the earlier example d & e & f & in that tasks a and b are initiated sequentially, not concurrently.

Similarly the following command line executes a and b sequentially in the background and, at the same time, executes c and d sequentially in the background. The subshell running a and b and the subshell running c and d run concurrently. The prompt returns immediately.

$ (a ; b) & (c ; d) &
[1] 15528
[2] 15529

The next script copies one directory to another. The second pair of parentheses creates a subshell to run the commands following the pipe. Because of these parentheses, the output of the first tar command is available for the second tar command despite the intervening cd command. Without the parentheses, the output of the first tar command would be sent to cd and lost because cd does not process input from standard input. The shell variables $1 and $2 represent the first and second command line arguments (page 481), respectively. The first pair of parentheses, which creates a subshell to run the first two commands, allows users to call cpdir with relative pathnames. Without them the first cd command would change the working directory of the script (and consequently the working directory of the second cd command). With them only the working directory of the subshell is changed.

$ cat cpdir
(cd $1 ; tar -cf - . ) | (cd $2 ; tar -xvf - )
$ cpdir /home/alex/sources /home/alex/memo/biblio

The cpdir command line copies the files and directories in the /home/alex/sources directory to the directory named /home/alex/memo/biblio. This shell script is almost the same as using cp with the -r option. Refer to Part V for more information on cp (page 616) and tar (page 786).


Job Control

A job is a command pipeline. You run a simple job whenever you give Linux a command. For example, type date on the command line and press RETURN: You have run a job. You can also create several jobs with multiple commands on a single command line:

$ find . -print | sort | lpr & grep -l alex /tmp/* > alexfiles &
[1] 18839
[2] 18876

The portion of the command line up to the first & is one job consisting of three processes connected by pipes: find (page 655), sort (page 50), and lpr (page 47). The second job is a single process running grep. Both jobs have been put into the background by the trailing & characters, so bash does not wait for them to complete before displaying a prompt.

Using job control you can move commands from the foreground to the background (and vice versa), stop commands temporarily, and list all the commands that are running in the background or stopped.

jobs: Lists Jobs

The jobs builtin lists all background jobs. The following sequence demonstrates what happens when you give a jobs command. Here the sleep command runs in the background and creates a background job that jobs reports on:

$ sleep 60 &
[1] 7809
$ jobs
[1] + Running                     sleep 60 &

fg: Brings a Job to the Foreground

The shell assigns job numbers to commands you run in the background (page 269). Several jobs are started in the background in the next example. For each job the shell lists the job number and PID number immediately, just before it issues a prompt.

$ xclock &
[1] 1246
$ date &
[2] 1247
$ Sun Dec 4 11:44:40 PST 2005
[2]+ Done          date
$ find /usr -name ace -print > findout &
[2] 1269
$ jobs
[1]- Running         xclock &
[2]+ Running         find /usr -name ace -print > findout &

Job numbers, which are discarded when a job is finished, can be reused. When you start or put a job in the background, the shell assigns a job number that is one more than the highest job number in use.

In the preceding example, the jobs command lists the first job, xclock, as job 1. The date command does not appear in the jobs list because it finished before jobs was run. Because the date command was completed before find was run, the find command became job 2.

To move a background job into the foreground, use the fg builtin followed by the job number. Alternatively, you can give a percent sign (%) followed immediately by the job number as a command. Either of the following commands moves job 2 into the foreground:

 $ fg 2 

or

 $ %2 

You can also refer to a job by following the percent sign with a string that uniquely identifies the beginning of the command line used to start the job. Instead of the preceding command, you could have used either fg %find or fg %f because both uniquely identify job 2. If you follow the percent sign with a question mark and a string, the string can match any part of the command line. In the preceding example, fg %?ace also brings job 2 into the foreground.
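For example, continuing the session above, the following command brings the find job into the foreground; the shell displays the job's command line as it does so:

$ fg %?ace
find /usr -name ace -print > findout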

Often the job you wish to bring into the foreground is the only job running in the background or is the job that jobs lists with a plus (+). In these cases you can use fg without an argument.

bg: Sends a Job to the Background

To move the foreground job to the background, you must first suspend (temporarily stop) the job by pressing the suspend key (usually CONTROL-Z). Pressing the suspend key immediately suspends the job in the foreground. You can then use the bg builtin to resume execution of the job in the background.

 $ bg 

If a background job attempts to read from the terminal, the shell stops it and notifies you that the job has been stopped and is waiting for input. You must then move the job into the foreground so that it can read from the terminal. The shell displays the command line when it moves the job into the foreground.

$ (sleep 5; cat > mytext) &
[1] 1343
$ date
Sun Dec  4 11:58:20 PST 2005
[1]+ Stopped                   ( sleep 5; cat >mytext )
$ fg
( sleep 5; cat >mytext )
Remember to let the cat out!
CONTROL-D
$

In the preceding example, the shell displays the job number and PID number of the background job as soon as it starts, followed by a prompt. Demonstrating that you can give a command at this point, the user gives the command date and its output appears on the screen. The shell waits until just before it issues a prompt (after date has finished) to notify you that job 1 is stopped. When you give an fg command, the shell puts the job in the foreground and you can enter the input that the command is waiting for. In this case the input needs to be terminated with a CONTROL-D to signify EOF (end of file). The shell then displays another prompt.

The shell keeps you informed about changes in the status of a job, notifying you when a background job starts, completes, or is stopped, perhaps waiting for input from the terminal. The shell also lets you know when a foreground job is suspended. Because notices about a job being run in the background can disrupt your work, the shell delays displaying these notices until just before it displays a prompt. You can set notify (page 321) to make the shell display these notices without delay.
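For example, the following command, given at a prompt or placed in a startup file such as .bashrc, turns on immediate notification:

$ set -o notify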

If you try to exit from a shell while jobs are stopped, the shell issues a warning and does not allow you to exit. If you then use jobs to review the list of jobs or you immediately try to leave the shell again, the shell allows you to leave and terminates the stopped jobs. Jobs that are running (not stopped) in the background continue to run. In the following example, find (job 1) continues to run after the second exit terminates the shell, but cat (job 2) is terminated:

$ find / -size +100k > $HOME/bigfiles 2>&1 &
[1] 1426
$ cat > mytest &
[2] 1428
$ exit
exit
There are stopped jobs.
$ exit
exit

login:

Manipulating the Directory Stack

Both the Bourne Again and the TC Shells allow you to store a list of directories you are working with, enabling you to move easily among them. This list is referred to as a stack. It is analogous to a stack of dinner plates: You typically add plates to and remove plates from the top of the stack, creating a first-in, last-out (FILO) stack.

dirs: Displays the Stack

The dirs builtin displays the contents of the directory stack. If you call dirs when the directory stack is empty, it displays the name of the working directory:

$ dirs
~/literature

The dirs builtin uses a tilde (~) to represent the name of the home directory. The examples in the next several sections assume that you are referring to the directory structure shown in Figure 8-2.

Figure 8-2. The directory structure in the examples


pushd: Pushes a Directory on the Stack

To change directories and at the same time add a new directory to the top of the stack, use the pushd (push directory) builtin. In addition to changing directories, the pushd builtin displays the contents of the stack. The following example is illustrated in Figure 8-3:

$ pushd ../demo
~/demo ~/literature
$ pwd
/home/sam/demo
$ pushd ../names
~/names ~/demo ~/literature
$ pwd
/home/sam/names

Figure 8-3. Creating a directory stack


When you use pushd without an argument, it swaps the top two directories on the stack and makes the new top directory (which was the second directory) become the new working directory (Figure 8-4):

$ pushd
~/demo ~/names ~/literature
$ pwd
/home/sam/demo

Figure 8-4. Using pushd to change working directories


Using pushd in this way, you can easily move back and forth between two directories. You can also use cd to change to the previous directory, whether or not you have explicitly created a directory stack. To access another directory in the stack, call pushd with a numeric argument preceded by a plus sign. The directories in the stack are numbered starting with the top directory, which is number 0. The following pushd command continues with the previous example, changing the working directory to literature and moving literature to the top of the stack:

$ pushd +2
~/literature ~/demo ~/names
$ pwd
/home/sam/literature

popd: Pops a Directory Off the Stack

To remove a directory from the stack, use the popd (pop directory) builtin. As the following example and Figure 8-5 show, popd used without an argument removes the top directory from the stack and changes the working directory to the new top directory:

$ dirs
~/literature ~/demo ~/names
$ popd
~/demo ~/names
$ pwd
/home/sam/demo

Figure 8-5. Using popd to remove a directory from the stack


To remove a directory other than the top one from the stack, use popd with a numeric argument preceded by a plus sign. The following example removes directory number 1, demo:

$ dirs
~/literature ~/demo ~/names
$ popd +1
~/literature ~/names

Removing a directory other than directory number 0 does not change the working directory.
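You can confirm this with pwd. Assuming the working directory was literature before the popd +1 above, it is still literature afterward:

$ pwd
/home/sam/literature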
