Test Steps

The following audit steps are divided into five sections:

  • Account management and password controls

  • File security and controls

  • Network security and controls

  • Audit logs

  • Security monitoring and other controls


The test steps in this chapter focus on testing the logical security of Unix and Linux boxes, as well as processes for maintaining and monitoring that security. However, there are other internal controls that are critical to the overall operations of a Unix/Linux environment, such as physical security, disaster recovery planning, backup processes, change management, capacity planning, and system monitoring. These topics are covered in Chapter 4 and should be included in your audit of the Unix/Linux environment if they have not already been covered effectively in a separate data center or entity-level IT controls audit.

Account Management and Password Controls


Most of the steps in this section require some form of testing over the system's password file. Prior to commencing work on these steps, the auditor should determine whether the system is using only its local password file (/etc/passwd) or it is also using some form of centralized account management such as NIS or LDAP. If the latter is true, then the auditor must execute the following steps on both the centralized password file and the local password file. The same concept applies for the steps that reference the group file.

In the "How" sections of the following steps, we will not attempt to specify the commands for every possible centralized account management system because there are a number of vendor-specific tools. We will include the details for pulling information from NIS, which is the most common of these systems, as an example. If your company uses a different tool, such as NIS+ or LDAP, you will need to work with your system administrator and review the documentation for these systems to determine the equivalent commands. However, the concepts described below for the local and NIS password and group files will apply.

1 Review and evaluate procedures for creating Unix or Linux user accounts and ensuring that accounts are created only when there's a legitimate business need. Also, review and evaluate processes for ensuring that accounts are removed or disabled in a timely fashion in the event of termination or job change.

If effective controls are not in place for providing and removing access to the server, it could result in unnecessary access to system resources. This, in turn, places the integrity and the availability of the server at risk.


Interview the system administrators, and review account-creation procedures. This process should include some form of verification that the user has a legitimate need for access. Take a sample of accounts from the password file, and review evidence that they were approved properly prior to being created. Alternatively, take a sample of accounts from the password file, and validate their legitimacy by investigating and understanding the job function of the account owners.

Also review the process for removing accounts when access is no longer needed. This process could include an automated feed from the company's human resources (HR) system providing information on terminations and job changes. Or the process could include a periodic review and validation of active accounts by the system administrators and/or other knowledgeable managers. Obtain a sample of accounts from the password file, and verify that they are owned by active employees and that those employees' job positions have not changed since the account's creation.

2 Ensure that all userIDs in the password file(s) are unique.

If two users have the same userID (UID), they can fully access each other's files and can "kill" each other's processes, even if they have different usernames. The UID is what the operating system uses to identify the user; the username merely maps to the corresponding UID in the password file.


For local accounts, perform the command more /etc/passwd, and review the entries to ensure that there are no duplicate UIDs. If NIS is used, the command ypcat passwd also should be used so that NIS UIDs can be examined.

The following command will list any duplicate UIDs found in the local password file:

awk -F: '{print $3}' /etc/passwd | sort | uniq -d

3 Ensure that passwords are shadowed and use strong hashes where possible.

In order for the system to function appropriately, the password file needs to be world readable. This means that if the encrypted passwords are contained within the file, every user on the system will have access to them. This, in turn, provides the ability for a user to copy the encrypted passwords and attempt to crack them via password-cracking tools that are freely available on the Internet. Given enough time, a brute-force cracking tool can guess even the most effective password. Consideration also should be given to the form of the passwords. The crypt routine traditionally used for Unix passwords is a relatively weak form of encryption by today's standards, and the maximum effective password length is eight characters. A better choice is to use MD5 hashes, which are harder to crack and allow more than eight characters for the password.


To determine whether a shadow password file is being used, perform a more /etc/passwd command in order to view the file. Look within the password field for all accounts. If each account has an "*" or "x" or some other common character in it, the system uses a shadow password file. The shadow password file will be located at /etc/shadow for most systems. Systems using NIS create some special problems that make the use of shadowed passwords more difficult, and older systems cannot shadow these passwords at all. If NIS is used in your environment, consult with the system administrator to discuss the possibilities of shadowing these passwords. If it is not possible to do so, then other password-related controls take on much greater importance.

MD5 is now the default hash on many Linux systems. The crypt form can be recognized because it is always 13 characters long; an MD5 hash in /etc/passwd or /etc/shadow will be prepended with the characters $1$ and is longer.
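The two hash forms can be distinguished mechanically. The following is a minimal sketch that classifies the entries in a shadow-style file; the sample entries and the sample_shadow filename are illustrative stand-ins, not data from a real system:

```shell
# Classify password hashes in a shadow-style file (sample data).
# A "$1$" prefix indicates an MD5 hash; a 13-character value is legacy crypt.
printf '%s\n' \
  'alice:$1$ab12cd34$Xyz1234567890abcdefghi:15000:0:99999:7:::' \
  'bob:AbCdEfGhIjKlM:15000:0:99999:7:::' > sample_shadow

hashes=$(awk -F: '{
  if ($2 ~ /^\$1\$/)         print $1 ": MD5 hash"
  else if (length($2) == 13) print $1 ": legacy crypt hash (weak)"
  else                       print $1 ": other or locked"
}' sample_shadow)
echo "$hashes"
```

Any account flagged as using a legacy crypt hash is a candidate for a forced password change once the system's default hash has been strengthened.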

4 Evaluate the file permissions for the password and shadow password files.

If a user can alter the contents of these files, he or she will be able to add and delete users, change user passwords, or make himself or herself a superuser by changing his or her UID to 0. If a user can read the contents of the shadow password file, he or she will be able to copy the encrypted passwords and attempt to crack them.


View the file permissions for these files by performing the ls -l command on them. The /etc/passwd file should be writable only by "root," and the /etc/shadow file should be both readable and writable only by "root."
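As a sketch of what the correct modes look like, the following creates demo files with the expected permissions and prints their mode strings. The demo_passwd and demo_shadow names are stand-ins; in a real audit, run ls -l directly against /etc/passwd and /etc/shadow:

```shell
# Demonstrate the expected mode strings (demo files, not the real ones).
touch demo_passwd demo_shadow
chmod 644 demo_passwd   # passwd must be world-readable but writable only by root
chmod 600 demo_shadow   # shadow should be readable and writable only by root

passwd_mode=$(ls -l demo_passwd | cut -c1-10)
shadow_mode=$(ls -l demo_shadow | cut -c1-10)
echo "demo_passwd: $passwd_mode"
echo "demo_shadow: $shadow_mode"
```

If the mode string on the real /etc/shadow shows any group or world access, question it with the system administrator.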

5 Review and evaluate the strength of system passwords.

If passwords on the system are easy to guess, it is more likely that an attacker will be able to break into that account, thus obtaining unauthorized access to the system and its resources.


Review system settings that provide password composition controls. For Solaris systems, the password policy is usually set in /etc/default/passwd. Perform a more command on this file, and view the PASSLENGTH parameter in order to determine minimum password length. Compare the value of this parameter with your company's IT security policy. Most Linux systems have /etc/login.defs, which provides basic controls such as minimum password length and maximum password age for locally created accounts.

Unfortunately, the standard Unix passwd program does not provide strong capabilities for preventing weak passwords. It will prevent a user from choosing his or her username as a password but not much else. Through discussions with the system administrator, determine whether other tools have been implemented to either replace or enhance the native passwd functionality for password composition requirements. One stronger possibility is npasswd, a replacement for passwd. Npasswd is currently hosted at http://www.utexas.edu/cc/Unix/software/npasswd/. Additional controls also can be provided through PAM by the use of pam_cracklib, pam_passwdqc, or a similar module (pam_cracklib is included in many Linux distributions). Look for lines beginning with password in /etc/pam.conf or the configuration files in /etc/pam.d/ to get an idea of what's in use on the system you're auditing. Perform a more command on these files to view their contents.
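On Linux, the basic composition parameters can be pulled directly from the file. A sketch, run against an illustrative sample rather than a live /etc/login.defs (the parameter names shown are the standard ones used in that file):

```shell
# Extract the password-composition and aging parameters from a
# login.defs-style file (sample data standing in for /etc/login.defs).
printf '%s\n' \
  '# sample of /etc/login.defs-style settings' \
  'PASS_MAX_DAYS 90' \
  'PASS_MIN_DAYS 1' \
  'PASS_MIN_LEN  8' > sample_login.defs

params=$(grep -E '^PASS_(MAX_DAYS|MIN_DAYS|MIN_LEN)' sample_login.defs)
echo "$params"
```

Compare the extracted values against your company's IT security policy.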

Consider obtaining a copy of the password file and the shadow password file and executing a password-cracking tool against the encrypted passwords in order to identify weak passwords. See the "Tools and Technology" section later in this chapter for information on potential password-cracking tools. Judgment should be used in interpreting the results because a brute-force cracking tool eventually will crack any password if given enough time. If the password files have been shadowed, then you really only need to worry about truly weak passwords that are obvious and easy to guess. These sorts of passwords likely will be guessed within the first 30 to 60 minutes when running a password-cracking program.

On the other hand, if the password files have not been shadowed, you likely will want to run the program for much longer because anyone with access to the system will have the ability to do the same thing.

6 Evaluate the use of password controls such as aging.

It is important to change passwords periodically for two primary reasons. First, without aging, an attacker with a copy of the encrypted or hashed passwords will have an unlimited amount of time to perform an offline brute-force cracking attack. Second, someone who already has unauthorized access (through cracking or password sharing) will be able to retain that access indefinitely.


Review system settings that provide password aging controls. For Solaris systems, the password policy is usually set in /etc/default/passwd. Perform a more command on this file and view the MAXWEEKS parameter in order to determine the maximum age for passwords and the MINWEEKS parameter in order to determine the minimum age for passwords. Minimum age is important to prevent a user from changing his or her password and then immediately changing it back to its previous value. View the settings of these parameters and compare them with your company's IT security policy.

Most Linux systems have /etc/login.defs, which provides basic controls such as minimum password length and maximum password age for locally created accounts. Additional controls can be provided through PAM by the use of pam_cracklib, pam_passwdqc, or a similar module (pam_cracklib is included in many Linux distributions). Look for lines beginning with password in /etc/pam.conf or the configuration files in /etc/pam.d/ to get an idea of what's in use on the system you're auditing. Perform a more command on these files to view their contents.

The "root" account generally will not be subject to automatic aging in order to prevent the possibility of the account being locked. However, there still should be a manual process for periodically changing the password in accordance with company policy. Review the process for changing this password, and look for evidence that this process is being followed.

7 Review the process used by the system administrator(s) for setting initial passwords for new users and communicating those passwords.

When new user accounts are created, the system administrator must assign an initial password to that user. If that password is easy to guess, it could allow the account to be hacked, resulting in unauthorized access to the server and its resources. If the initial password is not communicated via a secure channel, it could allow others to view the password and obtain unauthorized access to the account.


Interview the system administrator, and review documentation to understand the mechanism used for creating initial passwords. Ensure that this mechanism results in passwords that are difficult to guess and comply with your company's IT security policy.

Also, review the channels used for communicating the new passwords to users. Ensure that unencrypted transmissions are not used. Finally, it is often a good idea to require users to change their passwords immediately on first login. A new account's password can be expired immediately, forcing the user to change it at the next login, by using passwd -f on Solaris or passwd -e on Linux. Interview the system administrator to determine whether this is done; these practices generally cannot be verified other than by asking the system administrator how he or she handles them.

8 Ensure that each account is associated with and can be traced easily to a specific employee.

If the owner of an account is not readily apparent, it will impede forensic investigations regarding inappropriate actions performed by that account. If multiple people use an account, there is no accountability for actions performed by that account.


Review the contents of the password file(s). The owner of each account should be obvious, with the user's name or other unique identifier (such as employee number) either used as the username or placed in the GECOS field. Question any accounts that seem to be shared, such as guest accounts. If accounts such as these are required, they should be configured with restricted shells and/or such that a user cannot directly log into them (thus requiring the user to log in as himself or herself first and then using su or sudo to access the shared account, creating an audit trail).
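A quick way to surface untraceable accounts is to print each username alongside its GECOS field and flag entries where the field is empty. This is a sketch using sample data; the file name and entries are illustrative:

```shell
# Print username and GECOS (comment) field from a passwd-style file,
# flagging accounts with no recorded owner (sample data).
printf '%s\n' \
  'jdoe:x:1001:100:John Doe:/home/jdoe:/bin/sh' \
  'guest:x:1002:100::/home/guest:/bin/sh' > sample_passwd_gecos

trace=$(awk -F: '{print $1 " -> " ($5 == "" ? "NO OWNER RECORDED" : $5)}' sample_passwd_gecos)
echo "$trace"
```

Accounts with no recorded owner, or generic names such as "guest," should be questioned with the system administrator.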

9 Ensure that invalid shells have been placed on all disabled accounts.

This is only a significant risk if trusted access is allowed (see the "Network Security and Controls" section). If trusted access is allowed, this means that a user with a certain username on one system (the trusted system) can log into an account with that same username on another system (the trusting system) without entering a password. This can be done as long as the user account on the trusting system has a valid shell defined to it, even though the account may have been disabled. Therefore, if a system administrator disables an account but leaves it with a valid shell, a user on a remote, trusted system with the same username still could access that account.


View the contents of the password files (via the more command). If an account has been disabled, it will have an "*," "*LK*," or something similar in the password field (remember to look in the shadow password file if it is being used). For those accounts, review the contents of the shell field. If it has anything besides /dev/null, /bin/false, or something similar, the account probably still can access a valid shell or program.
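The check above can be scripted. The following sketch flags locked accounts that still carry a valid shell; the sample entries and file name are illustrative, and the set of "invalid" shells tested is the minimal pair named in the step:

```shell
# Flag disabled accounts ("*" or "*LK*" in the password field) that still
# have a potentially valid login shell (sample passwd-style data).
printf '%s\n' \
  'svc1:*LK*:501:100:service acct:/home/svc1:/bin/sh' \
  'svc2:*:502:100:service acct:/home/svc2:/bin/false' > sample_passwd_shells

warnings=$(awk -F: '($2=="*" || $2=="*LK*") && $7!="/bin/false" && $7!="/dev/null" \
  {print "WARNING: disabled account "$1" still has shell "$7}' sample_passwd_shells)
echo "$warnings"
```

On a system using a shadow file, run the same logic against the shadow file's password field instead.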

10 Review and evaluate the use of superuser (root-level) access.

An account with root-level access has the ability to do anything with the system, including deleting all files and shutting the system down. Access to this ability should be minimized.


Review the contents of the password files, and identify all accounts with a UID of 0. Question the need for any account besides "root" to have a UID of 0. Determine via interviews who knows the passwords to these accounts, and evaluate the appropriateness of this list. If sudo is used, review the /etc/sudoers file to evaluate the ability of users to run commands as "root" with the sudo command. The sudo tool can be used to grant specific users the ability to run specific commands as if they were "root." This is generally preferable to giving users full root access.
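Identifying UID 0 accounts is a one-line check. A sketch against sample data (the extra "toor" entry is an illustrative example of the kind of account this check should surface):

```shell
# List every account with UID 0 in a passwd-style file (sample data).
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/sh' \
  'toor:x:0:0:second superuser:/root:/bin/sh' \
  'alice:x:1001:100:Alice:/home/alice:/bin/sh' > sample_passwd_root

uid0=$(awk -F: '$3 == 0 {print $1}' sample_passwd_root)
echo "$uid0"
```

Run the same command against /etc/passwd (and ypcat passwd where NIS is used), and question any result other than "root."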

The basic format of an entry in the sudoers file would look something like this:

Andrew    ALL=(root) /usr/bin/cat
Micah     ALL=(ALL) ALL

In this example, user Andrew would be allowed to run the command /usr/bin/cat as the user root on all systems, and user Micah would be allowed to run any command as any user on any system. There are, of course, many other options that will not be covered here. Consult the man page for sudoers for more information.

11 Review and evaluate the use of groups, and determine how restrictively they are used.

This information will provide a foundation for evaluating file permissions in later steps. If all users are placed in one or two large groups, then group file permissions are not very useful. For example, if all users are part of one large group, then a file that allows group "write" permissions effectively allows world "write" permissions. However, if users are placed in selective, well-thought-out groups, group file permissions are effective controls.


Review the contents of the /etc/group, /etc/passwd, and related centralized files (e.g., NIS) using the more (e.g., more /etc/passwd) and, for NIS, ypcat (e.g., ypcat passwd and ypcat group) commands.

Look at the password as well as the group files to get an idea of group assignments because user primary group assignments from the password file do not need to be relisted in the group file. In other words, if a user is assigned to the "users" group in the /etc/passwd file, there is no need to list him or her as a member of that group in the /etc/group file. Therefore, to obtain a full listing of all members of the "users" group, you must determine who was assigned to that group in the /etc/group file and also determine who was assigned to that group in the /etc/passwd file (along with any NIS, LDAP, etc. equivalents being used in your environment). It is important to note that a group does not need to be listed in the group file in order to exist. It is therefore necessary to identify all group IDs (GIDs) in the password file and determine the membership of those groups. If you rely on the group file to identify all groups on the system, you may not receive a complete picture.
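The membership merge described above can be sketched as follows. The file names, GID, and sample users are illustrative; on a real system you would run this against /etc/passwd and /etc/group (plus the NIS equivalents):

```shell
# Build the full membership of GID 100 by combining primary-group
# assignments from the passwd file with supplementary members from the
# group file (sample data).
printf '%s\n' \
  'alice:x:1001:100:Alice:/home/alice:/bin/sh' \
  'bob:x:1002:200:Bob:/home/bob:/bin/sh' > sample_passwd_grp
printf '%s\n' 'users:x:100:carol,dave' > sample_group

gid=100
primary=$(awk -F: -v g="$gid" '$4 == g {print $1}' sample_passwd_grp)
supplementary=$(awk -F: -v g="$gid" '$3 == g {print $4}' sample_group | tr ',' '\n')
members=$(printf '%s\n' $primary $supplementary)
echo "$members"
```

Here alice is picked up from her primary GID in the passwd file even though the group file lists only carol and dave; relying on the group file alone would have missed her.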

12 Evaluate usage of passwords at the group level.

Group-level passwords allow people to become members of groups with which they are not associated. If a group has a password associated with it in the group file, then a user can use the newgrp <group name> command and will be prompted to enter that group's password. Once the password is entered correctly, the user will be given the rights and privileges of a member of that group for the duration of the session. There generally is little need for this functionality because users generally are just granted membership to whichever groups they need to access. Creating a group-level password just creates another vector of attack on the system by creating the opportunity for users to hack the group-level passwords and escalate their privileges.


Review the contents of the group file(s) by using more /etc/group for the local file and ypcat group for NIS. If the groups have anything besides a common character (such as an "*" or even nothing) in the password field (the second field for each entry), passwords are being used. If they are, speak to the system administrators to understand the purpose and value of using these group-level passwords, and review the process for restricting knowledge of these passwords.

To look for passwords in /etc/group, you could use this command in your audit script:

awk -F: '{if($2!="" && $2!="x" && $2!="*")print "A password is set for group "$1" in /etc/group\n"}' /etc/group

13 Review and evaluate the security of directories in the default path used by the system administrator when adding new users. Evaluate the usage of the "current directory" in the path.

A user's path contains a set of directories that are to be searched each time the user issues a command without typing the full pathname. For example, let's say the ls command on your system is located at /bin/ls. In order to execute this program and view the permissions in the /home directory, you could type /bin/ls /home. By typing in the exact location of the file, you are using the full pathname. However, we rarely do this. Instead, the norm is to type ls /home. In this case, the user's path is the mechanism for finding the file that is to be executed.

For example, let's say that your path looks like this:

/usr/bin:/usr/local/bin:/bin
This means that when you type in a command, the operating system will first look for a file by that name in /usr/bin. If the file doesn't exist there, it will next look in /usr/local/bin. If it still doesn't find a file by that name there, it will look in /bin. If it is still unsuccessful, then the command will fail. Thus, in our example, we have attempted to execute the ls command, which is located in /bin. The system will first see if there is a file called ls in the /usr/bin directory. Since there is not, it will look in the /usr/local/bin directory. Since the file is not there either, it will look in /bin. There is a file called ls in /bin on our system, so it will attempt to execute that file. If the permissions on that file grant you execute permissions, you will be allowed to run the program.

Attackers who can write to a directory in a user's path can perform file name spoofing. For example, if the directory that contains the ls command is not secured, an attacker could replace the ls command with his or her own version. Alternatively, if the "current directory" (meaning whatever directory the user happens to be sitting in at the time the command is executed) or another unprotected directory is placed early in the user's path, the attacker could place his or her own version of the ls command in one of these and never have to touch the real ls command.

Because of all this, directories in the path should be user- or system-owned and should not be writable by the group or world.

A "." or an empty entry (a space) represents the "current directory," which means whatever directory the user happens to be sitting in at the time he or she executes a command. Since this is an unknown, it is generally safer to leave this out of the path. Otherwise, an attacker could trick a user or administrator into switching to a specific directory and then executing a common command, a malicious version of which could be located in that directory.

Each user has the ability to set his or her path in his or her initialization files. However, most users will never touch their path, and it is important for the system administrator to provide a default path that is secure.


The easiest way to view your own path is by typing echo $PATH at the command line. The default setting for users' paths may be found in /etc/default/login, /etc/profile, or one of the files in /etc/skel. Ask the system administrator where the default setting is kept if you are unsure. If the user has modified his or her path, this typically will be done in one of the dot-files in the home directory. Look at the contents of such files as .login, .profile, .cshrc, .bash_login, etc. A quick way to look is to use the command grep "PATH=" .* in the user's home directory. A user's home directory can be determined by viewing his or her entry in the password file.

Once you know the name of the file that contains the path, view the contents of the file using the more command. The ls -ld command then can be performed on each directory in the path in order to view directory permissions. The directories should be writable only by the user and system accounts. Group and world write access should not be allowed (unless the group contains only system-level accounts).

14 Review and evaluate the security of directories in root's path. Evaluate the usage of the "current directory" in the path.

If a user can write to a directory in root's path, it is possible that he or she could perform file name spoofing and obtain access to the root account. See the immediately preceding step for further explanation of this concept.


Have the system administrator display root's path for you (using the echo $PATH command when logged in as root), and then review the permissions of each directory using the ls -ld command. All directories in root's path should be system-owned and should not be group or world writable (unless the group contains only system-level accounts such as bin and sys). The "current directory" generally should not be part of root's path.

The following will print the permissions of root's path (assuming that the script is executed as root) and warn if there is a "." in the path or if one of the directories is world writable:

#!/bin/sh
for i in `echo $PATH | sed 's/:/ /g'`
do
  if [ "$i" = "." ]
  then
    echo "WARNING: PATH contains \".\""
  else
    ls -ld $i
    ls -ld $i | awk '{if(substr($1,9,1)=="w")print "WARNING - "$9" in root'\''s path is world writable"}'
  fi
done

15 Review and evaluate the security of user home directories and configuration files. They generally should be writable only by the owner.

User config files basically are defined as any file located in the user's home directory that starts with a dot (.). These are commonly referred to as the user's dot-files. These files define the user's environment, and if a third party can modify them, this provides the ability to obtain privileged access to the account. For example, when a user first logs in, commands within his or her .login, .profile, .bashrc, etc. (depending on the shell) are executed. If an attacker is able to modify one of these files, he or she can insert whatever arbitrary commands he or she wishes, and the user will execute those commands the next time he or she logs in. For example, commands could be executed that copy the user's shell to another file and make it Set UID (SUID). The attacker then would be able to execute this new file and "become" that user. Access to these files also provides the ability to change the user's path or create malicious aliases for common commands by modifying these files. Other config files, such as .cshrc and .kshrc, are executed at login, when a new shell is run, or when someone uses the su command to switch to the user's account. The ability to insert arbitrary commands into these files presents a risk similar to that of the .login and .profile files.

Another config file that should be locked down is the .rhosts file. This file provides trusted access (access without the use of a password) to the user's account from specific accounts on specific other systems. A person who can modify this file can provide himself or herself with trusted access to the user's account.

Even though specific risks were not mentioned earlier for other dot-files, it is generally a good idea to keep them locked down. There is generally no legitimate reason that others should be modifying a user's config files.

Access to a user's home directory also should be locked down. If an attacker has write privileges to the directory, he or she will have the ability to delete any of the user's config files and replace them with his or her own versions.


The location of user home directories can be obtained from the account entries in the password file. The ls -ld command should be performed on each directory in order to view directory permissions. The ls -al command should be performed on each directory in order to view the permission on the files (including the config files) within the directory.
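The directory check can be automated as a simple loop over the home directories listed in the password file. This is a sketch against a demo directory and a sample passwd entry (both illustrative); in a real audit the loop would read home directories from /etc/passwd:

```shell
# Flag home directories that are group- or world-writable, based on the
# home-directory field of a passwd-style file (demo directory and sample data).
mkdir -p demo_home/alice && chmod 777 demo_home/alice
printf '%s\n' 'alice:x:1001:100:Alice:demo_home/alice:/bin/sh' > sample_passwd_home

issues=$(awk -F: '{print $6}' sample_passwd_home | while read -r dir; do
  perms=$(ls -ld "$dir" | cut -c1-10)
  # character 6 of the mode string is the group-write bit; character 9 is world-write
  case "$perms" in
    ?????w????|????????w?) echo "WARNING: $dir is group- or world-writable ($perms)" ;;
  esac
done)
echo "$issues"
```

The same loop, with ls -al substituted for ls -ld, can be extended to examine the dot-files within each home directory.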

File Security and Controls

16 Evaluate the file permissions for a judgmental sample of critical files and their related directories.

If critical files are not protected properly, then the data within these files can be changed or deleted by inappropriate users. This can result in system disruption or unauthorized disclosure and alteration of proprietary information.


Using the ls -l command, examine the permissions on critical system files and their related directories. Generally, the most critical files within the Unix and Linux operating systems are those contained in the following directories:

  • /bin, /usr/bin, /sbin, /usr/sbin, and/or /usr/local/bin (programs that interpret commands and control such things as changing passwords)

  • /etc (files that contain such information as passwords, group memberships, and trusted hosts and files that control the execution of various daemons)

  • /usr or /var (contain various accounting logs)

Question the need for write access on these directories and the files contained therein to be granted to anyone besides system administration personnel.

In addition, there likely will be other critical data files (such as files containing key application data and company proprietary information) on the system you are auditing that should be secured. Interview the system administrator to help identify these.

For ease of use and in order to get a full picture of the file system, it may be best to have the system administrator run the ls -alR command against the entire file system and place the results in a file for you. You then can view the contents of this file in performing this and other steps. The system administrator must do this because the superuser is the only user who can access the contents of all directories.

There are several variations of what you might want to look for short of a full ls -alR. If, for example, you want to find all world-writable files (but excluding symlinks), use find / -perm +o=w ! -type l -print. Remember that man is your friend, and check the man pages to get more ideas on how you can use that command in your audit.

17 Look for open directories (directories with permission set to drwxrwxrwx) on the system and determine whether they should have the sticky bit set.

If a directory is open, anyone can delete files within the directory and replace them with their own files of the same name. This is sometimes appropriate for /tmp directories and other repositories for noncritical, transitory data; however, it is not advisable for most directories. By placing the sticky bit on the directory (setting permissions to drwxrwxrwt), only the owner of a file can delete it.


Examine directory permissions within the recursive file listing obtained from the preceding step, and search for open directories (in the listing of ls -alR, note that the directory permissions will be listed next to the "."). To find just directories with world-write permissions, you can use the command find / -type d -perm +o=w. For any such directories discovered, discuss the function of those directories with the system administrator, and determine the appropriateness of the open permissions.

18 Evaluate the security of all SUID files on the system, especially those that are SUID to "root."

SUID files allow users to execute them under the privileges of another UID. In other words, while that file is being executed, the operating system "pretends" that the user executing it has the privileges of the UID that owns the file. For example, every user needs the ability to update the password file in order to change passwords periodically. However, it would not be wise to set the file permissions of the password file to allow world-write access because doing so would give every user the ability to add, change, and delete accounts. Therefore, the passwd command was created in order to allow users to update their passwords without the ability to alter the rest of the password file. The passwd program is owned by "root" and has the SUID bit set (-rwsr-xr-x), meaning that when users execute it, they do so using the privileges of "root."

As can be seen, if an SUID file is writable by someone other than the owner, it may be possible for the owning account to be compromised. Other users could change the program being run to execute arbitrary commands under the file owner's UID. For example, a command could be inserted such that the owner's shell is copied to a file and made to be SUID. Then, when the attacker executed this copied shell, it would run as if it were the owner of the SUID file, allowing the attacker to execute any command using the privilege level of the captured account.


For Solaris and Linux, a full list of SUID files can be viewed by using the following command:

find / -perm -u+s

Note that the results of this command will not be complete unless it is run by someone with superuser access.

Review the file permissions for those programs, particularly for those that are SUID to root. They should be writable only by the owner.

Also question the need for any programs that are SUID to a user account. There should be little reason for one user to run a program as if he or she were another user. Most SUID programs are SUID to root or some other system or application account. If you see a program that is SUID to a user account, it is possible that this program is being used to capture that user's account.
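The checks above can be combined into a single find invocation. Below is a hedged sketch that stages a deliberately insecure SUID file in a scratch directory and then flags it; on a real audit, the superuser would run the same find against / rather than the scratch root used here.

```shell
# Demonstration only: create a scratch SUID file that is also world-writable,
# then flag it with find(1). A writable SUID file lets other users plant
# commands that run with the file owner's privileges.
demo=$(mktemp -d)
touch "$demo/risky"
chmod 4777 "$demo/risky"     # SUID bit set and world-writable: a finding

# The audit check: SUID files writable by group or world
suid_findings=$(find "$demo" -type f -perm -4000 \
    \( -perm -g+w -o -perm -o+w \) 2>/dev/null)
echo "$suid_findings"
rm -rf "$demo"
```

Any path printed by the find is a file that should be discussed with the system administrator.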

19 Review and evaluate security over the kernel.

The kernel is the core of the operating system. If it can be altered or deleted, an attacker could destroy the entire system.


Perform the ls -l command on the location of the kernel for the system you are auditing. It should be owned and only writable by the superuser. There are a number of possible locations for the kernel. Some common kernel names are /unix (AIX), /stand/vmunix (HP), /vmunix (Tru64), /kernel/genunix (Solaris), and /boot/vmlinuz (Linux). Ask the system administrator for the location of the kernel on the system you are auditing.
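A minimal sketch of the permission test for this step. A scratch file stands in for the kernel image so the logic can be demonstrated without superuser access; on a real system you would point the check at the kernel path supplied by the administrator (e.g., /boot/vmlinuz on Linux).

```shell
# Report a finding if the given file is writable by group or world.
check_owner_only_write() {
    if [ -n "$(find "$1" -maxdepth 0 \( -perm -g+w -o -perm -o+w \))" ]; then
        echo "FINDING: $1 is writable by group or world"
    else
        echo "OK: $1 is writable only by its owner"
    fi
}

scratch=$(mktemp)                     # stands in for the kernel image
chmod 644 "$scratch"
kernel_ok=$(check_owner_only_write "$scratch")
chmod 666 "$scratch"                  # deliberately loosened for the demo
kernel_bad=$(check_owner_only_write "$scratch")
echo "$kernel_ok"
echo "$kernel_bad"
rm -f "$scratch"
```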

20 Ensure that all files have a legal owner in the /etc/passwd file.

Each time a file is created, it is assigned an owner. If that owning account is subsequently deleted, the UID of that account still will be listed as the owner of the file unless ownership is transferred to a valid account. If another account is created later with that same UID, the owner of that account will, by definition, be given ownership of those files. For example, let's say that Grant (UID 226) creates the file /grant/file. UID 226 (Grant) is listed as the owner of this file. Grant then is fired, and his account is deleted. However, ownership of his file is not transferred. The operating system still considers UID 226 to be the owner of that file even though that UID no longer maps to a user in the password file. A few months later, Kate is hired and is assigned UID 226. The system now considers Kate to be the owner of the file /grant/file, and she has full privileges over it. If /grant/file contains highly sensitive information, this could be a problem. In order to avoid this problem, before deleting an account, the system administrators should disposition all files owned by that account, either by deleting them or by transferring ownership.


Have the system administrator perform the quot command (which has to be run by the superuser). This command will show all file owners on the system. Review this list, and ensure that a username, and not a UID, is shown for every entry. If a UID appears, it means that there is no entry in the password file for that UID, and so the system could not map the UID to a username. If a user is added later to the password file with that UID, that user would have ownership of these files.


The quot command is not available on all versions of Unix and Linux. If this is the case, the output of an ls -alR command will need to be reviewed manually to see if any files do not list a valid username as the owner.
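Where quot is unavailable, most versions of find can also locate unowned files directly: the -nouser predicate matches files whose owning UID has no entry in the password database. A sketch; on a real audit the superuser would scan / rather than the scratch root assumed here for speed.

```shell
# List files whose owning UID no longer maps to a password-file entry.
# TOP is a hypothetical scan root; an auditor would use / and run as root.
TOP=${TOP:-/tmp}
unowned=$(find "$TOP" -xdev -nouser 2>/dev/null)
if [ -n "$unowned" ]; then
    echo "Unowned files under $TOP:"
    echo "$unowned"
else
    echo "No unowned files under $TOP"
fi
```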

21 Ensure that the chown command cannot be used by users to compromise user accounts.

The chown command allows users to transfer ownership of their files to someone else. If a user can transfer an SUID file to another user, he or she then will be able to execute that file and "become" the user. For example, if a user copies his or her shell, makes it SUID and world executable, and then transfers ownership to "root," then, by executing that file, the user becomes "root."


Many versions of Unix only allow the superuser to execute chown. Many others do not allow SUID bits to be transferred to another user. In order to determine whether these controls are in place on the machine you are auditing, perform the following in order:

  1. Review the password file and determine where your shell is located (it probably will be something like /bin/csh or /usr/bin/sh).

  2. cp <shell file name> ~/myshell. This will create a copy of your shell file in your home directory.

  3. chmod 4777 ~/myshell. This will make your new shell file SUID and world executable.

  4. Choose another user from the password file to transfer ownership to, preferably a fellow auditor.

  5. chown <new owner name> ~/myshell. This will attempt to transfer ownership of the file to another user.

  6. ls -l ~/myshell. This will let you see whether you transferred ownership successfully and, if so, whether the SUID bit also transferred.

  7. If the SUID bit transferred to another owner, execute the file by typing ~/myshell. This will execute the shell.

  8. whoami. This should show that you are now the other user and have taken over his or her account.

  9. If this happens, the system administrator will need to contact his or her vendor for a fix.
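The nine manual steps above can be sketched as a single script. OTHER_USER is a placeholder for the fellow auditor's username ("nobody" is assumed here); on a properly controlled system, the chown either fails outright or the SUID bit is stripped during the transfer.

```shell
# Sketch of the chown control test. "nobody" stands in for the fellow
# auditor's account; substitute a real username when running this live.
OTHER_USER=${OTHER_USER:-nobody}
cp /bin/sh "$HOME/myshell"                  # step 2: copy your shell
chmod 4777 "$HOME/myshell"                  # step 3: SUID and world-executable
if chown "$OTHER_USER" "$HOME/myshell" 2>/dev/null; then
    # chown succeeded; see whether the SUID bit survived the transfer (step 6)
    if [ -u "$HOME/myshell" ]; then
        chown_check="FINDING: SUID bit survived chown; contact the vendor"
    else
        chown_check="OK: chown allowed, but the SUID bit was cleared"
    fi
else
    chown_check="OK: only the superuser may chown files on this system"
fi
echo "$chown_check"
rm -f "$HOME/myshell"                       # clean up the copied shell
```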

22 Obtain and evaluate the default umask value for the server.

The umask determines what permissions new files and directories will have by default. If the default umask is not set properly, users could inadvertently be giving group and/or world access to their files and directories. The default should be for files to be created securely. Privileges then can be loosened based on need and conscious decisions by the users (as opposed to their being unaware that their new files and directories are not secure).


The default may be set in /etc/profile or in one of the files in /etc/skel. However, the easiest test is often just to view the umask value for your own account because this usually will be a representation of the default value for all new users. This can be done by using the umask command.

The umask removes privileges when files and directories are created, using the octal format of file permissions. Directories are created from a base mode of 777, and most programs create files from a base mode of 666 (so new files never carry execute bits by default). In other words, with a umask of 000, all new directories will be created with default permissions of 777, meaning full access for the owner, group, and world, and all new files with permissions of 666.

For example, if the umask is set to 027, it will result in the following default permissions for newly created directories:

    Normal default                         777   (rwxrwxrwx)
    Minus the umask                      - 027
    Default permissions on this server     750   (rwxr-x---)

This provides full access to the owner, read and execute access to the group, and no access to the world.

At a minimum, the system default generally should be set to a value of 027 (group write and all world access removed) or 037 (group write/execute and all world access removed).
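The effect of a 027 umask can be demonstrated directly. Note that because most programs create files from a base mode of 666, a 027 umask yields 640 files and 750 directories; the scratch directory below is hypothetical.

```shell
# Show the default permissions produced under a 027 umask.
demo=$(mktemp -d)
(
  cd "$demo"
  umask 027
  touch newfile        # 666 with the 027 bits masked off -> 640 (rw-r-----)
  mkdir newdir         # 777 with the 027 bits masked off -> 750 (rwxr-x---)
)
file_mode=$(ls -l "$demo/newfile" | cut -c1-10)
dir_mode=$(ls -ld "$demo/newdir" | cut -c1-10)
echo "$file_mode $dir_mode"
rm -rf "$demo"
```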

23 Examine the system's crontabs, especially root's, for unusual or suspicious entries.

A cron executes a program at a preset time. It is basically the Unix or Linux system's native way of letting you schedule jobs. The crontab (short for cron table) contains all the crons scheduled on the system. Crons can be used to create time bombs or to compromise the owning account. For example, if an attacker managed to compromise a user's account, he or she could set up a cron that would, nightly, copy the user's shell and make it SUID and then delete this copy of the shell 15 minutes later. The attacker then could regain access to the account daily during that time period, but security-monitoring tools would not detect it unless the tools happened to run in that 15-minute window. An example of a time bomb would be in a case where a system administrator is fired or quits and schedules a cron to run 6 months later that crashes the system.


The crontabs should be located within directory /usr/spool/cron/crontabs or /var/spool/cron/crontabs. By performing the ls -l command on this directory, you will be able to list the contents. Each account with a crontab will have its own file in this directory. The contents of these files can be viewed with the more command. This will allow you to see the commands that are being executed and the schedule for that execution. Based on file permissions, you may need the administrator to display the contents of the crontabs. Also, depending on the level of your Unix knowledge, you may need the administrator's help in interpreting the contents of the files.

24 Review the security of the files referenced within crontab entries, particularly root's. Ensure that the entries refer to files that are owned by and writable only by the owner of the crontab. Also ensure that no crons are being run from open directories (permissions set to drwxrwxrwx).

All crons are run as if the owner of the crontab is running them, regardless of the owner of the file being executed. If someone besides the owner of the crontab can write to a file being executed by the crontab, it is possible for an unauthorized user to gain access to those accounts by altering the program being executed to cause the crontab owner to execute arbitrary commands (such as copying the cron owner's shell and making it SUID). For example, if root's crontab has an entry that executes the file /home/barry/flash, and that file is owned by "Barry," then "Barry" has the ability to add any command he wants to the flash file, causing "root" to execute that command the next time the cron is executed.

If a crontab is executing a file that is in an open directory, this would allow other users to delete the program being run and replace it with their own, again potentially resulting in the owner of the crontab executing arbitrary commands.


The contents of each user's crontab should be reviewed (see the preceding step for more information). The ls -l command should be performed on each file being executed in a crontab, and the ls -ld command should be executed for each of the directories containing those files.
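The review can be sketched for a single crontab entry as follows. The crontab line and the flash program are hypothetical, with deliberately insecure permissions so the check reports a finding.

```shell
# Check that the program referenced by a crontab entry is writable only by
# its owner. Scratch files stand in for a real crontab and its target.
demo=$(mktemp -d)
printf '#!/bin/sh\necho flash\n' > "$demo/flash"
chmod 777 "$demo/flash"                       # world-writable on purpose
cronline="0 2 * * * $demo/flash"              # fields 1-5 are the schedule

prog=$(echo "$cronline" | awk '{print $6}')   # field 6 is the program run
if [ -n "$(find "$prog" -perm -g+w -o -perm -o+w 2>/dev/null)" ]; then
    cron_check="FINDING: $prog is writable by someone besides its owner"
else
    cron_check="OK: $prog is writable only by its owner"
fi
echo "$cron_check"
rm -rf "$demo"
```

The same loop logic would be repeated for each entry in each user's crontab, along with an ls -ld on the containing directory.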

25 Examine the system's scheduled atjobs for unusual or suspicious entries.

atjobs are one-time jobs that are scheduled to run some time in the future. They operate much like cron jobs (except that they are executed only once) and can be used to create time bombs.


The atjobs should be located within directory /usr/spool/cron/atjobs or /var/spool/cron/atjobs. By performing the ls -l command on this directory, you will be able to list the contents. The contents of these files can be viewed with the more command. This will allow you to see the commands that are being executed and the schedule for that execution. Based on file permissions, you may need the administrator to display the contents of the atjobs. Also, depending on the level of your Unix knowledge, you may need the administrator's help in interpreting the contents of the files.

Network Security and Controls

26 Determine what network services are enabled on the system, and validate their necessity with the system administrator. For necessary services, review and evaluate procedures for assessing vulnerabilities associated with those services and keeping them patched.

Whenever remote access is allowed (i.e., whenever a network service is enabled), a new potential vector of attack is created, increasing the risk of unauthorized entry into the system. Therefore, network services should be enabled only when there is a legitimate business need for them.

New security holes are discovered and communicated frequently to the Unix/Linux community (including potential attackers). If the system administrator is not aware of these alerts, and if he or she does not install security patches, well-known security holes could exist on the system, providing a vector for compromising the system.


This is one of the most critical steps you will perform. Unnecessary and unsecured network services are the number one vector of attack on *nix servers. They are what will allow someone who has no business being on the system to either gain access to the system or disrupt the system.


Use the netstat -an command, and look for lines containing LISTEN or LISTENING. These are the TCP and UDP ports on which the host is available for incoming connections. If lsof is present on the system (more common on Linux), then lsof -i can be used.

Once you have obtained a list of enabled services, talk through the list with the system administrator to understand the need for each service. Many services are enabled by default and therefore were not enabled consciously by the system administrator. For any services that are not needed, encourage the administrator to disable them.
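As a sketch, the netstat output can be reduced to its listening sockets with grep. The capture below is a hypothetical three-line sample standing in for real netstat -an output.

```shell
# Filter a netstat -an capture down to listening sockets and count them.
sample=$(mktemp)
cat > "$sample" <<'EOF'
tcp   0   0 0.0.0.0:22      0.0.0.0:*       LISTEN
tcp   0   0 127.0.0.1:25    0.0.0.0:*       LISTEN
tcp   0   0 10.1.1.5:51515  10.1.1.9:443    ESTABLISHED
EOF
grep 'LISTEN' "$sample"           # matches both LISTEN and LISTENING
listeners=$(grep -c 'LISTEN' "$sample")
echo "listening sockets: $listeners"
rm -f "$sample"
```

Each port in the resulting list becomes a line item to discuss with the system administrator.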

Understand the process used to keep abreast of new vulnerabilities for enabled services and to receive and apply patches for removing those vulnerabilities. Common sources for vulnerability announcements include vendor notifications and Computer Emergency Response Team (CERT) notices. CERT covers the high-profile vulnerabilities, but you really should be getting notifications from your operating system (OS) and add-on software vendors to ensure adequate coverage. Information on this process can be gathered via interviews and review of documentation.

If you need to validate a specific patch or package version, you can view installed packages and patches via the following commands:

  • Solaris: showrev -p will list the patches that have been applied; these can be cross-referenced with the patches listed in the security advisory from Sun.

  • Linux: rpm -q -a (Red Hat or other distributions using RPM) or dpkg --list (Debian and related distributions) will show the versions of installed packages.

Note that software can be installed outside the package-management system provided by the vendor, in which case these commands won't show you the requisite information. If you need to find the version of an executable, try running the command with the -v switch. In most cases, this will show you version information that you can compare with information in vulnerability notices.

A network scan of existing vulnerabilities also can be used to help validate the effectiveness of the patching process. See the next step for further details.

Consider the configuration of the services, not just whether they are allowed. The proper configuration of certain services such as NFS, anonymous FTP, and those that allow trusted access and root login are discussed later in this chapter. Space restrictions prevent us from detailing the proper configuration of every potential service (plus new vulnerabilities are discovered all the time). This is why the use of a network scanning tool is a critical component of an effective audit. Such a tool will keep up with and test for the latest vulnerabilities for you.

27 Execute a network vulnerability-scanning tool in order to check for current vulnerabilities in the environment.

This will provide a snapshot of the current security level of the system (from a network services standpoint). The world of network vulnerabilities is an ever-changing one, and it is unrealistic to create a static audit program that will provide an up-to-date portrait of vulnerabilities that should be checked. Therefore, a scanning tool that is updated frequently is the most realistic mechanism for understanding the current security state of the machine. In addition, if the system administrator has a security-patching process in place, this scan will provide validation as to the effectiveness of that process (or as to whether it is really being executed).


See the "Tools and Technology" section later in this chapter for information on potential network vulnerability-scanning tools. Even though many of these tools are designed to be nondisruptive and to not require access to the system, you always should inform the appropriate IT personnel (e.g., the system administrator, the network team, and IT security) that you plan to run the tool, receive their approval, and schedule with them the execution of the tool. There is always a chance that the scanning tool will interact in an unexpected fashion with a port and cause a disruption, so it is important that others be aware of your activities. These tools almost always should be run in a "safe" (nondisruptive) mode such that they do not attempt to exploit any vulnerabilities discovered. There may be rare occasions where you will want to run an actual exploit to get more accurate results, but this should be done only with buy-in from and coordination with the system owner and administrator.

28 Review and evaluate the usage of trusted access via the /etc/hosts.equiv file and user .rhosts files. Ensure that trusted access is not used or, if deemed to be absolutely necessary, is restricted to the extent possible.

Trusted access provides the ability for users to access the system remotely without the use of a password. Specifically, the /etc/hosts.equiv file creates trust relationships with specific machines, whereas the .rhosts file creates trust relationships with specific users on specific machines.

For example, if system "Trusting" has an /etc/hosts.equiv file that lists machine "Trusted" as a trusted host, then any user that has an account with the same username on both systems will be able to access "Trusting" (the trusting machine) from "Trusted" (the trusted machine) without the use of a password. Thus, if the username "Hal" exists on both machines, the owner of the "Hal" account on "Trusted" will be able to access the "Hal" account on "Trusting" without using a password. Keep in mind that the key is the account name. If John Jones has an account on both machines, but one has the account name "jjones" and the other has the account name "jjonzz," then the trust relationship won't work. The operating system won't acknowledge them as the same account.

The .rhosts files work similarly except that they are specific to a user. Each user can have a .rhosts file in his or her home directory that provides trusted access to his or her account. If username "Barry" on system "Trusting" has a .rhosts file in his or her home directory and that .rhosts file lists system "Trusted," then the "Barry" account on "Trusted" will be able to access the "Barry" account on "Trusting" without using a password. Alternatively, system and username pairs can be listed in the .rhosts file. The .rhosts file for "Barry" on "Trusting" could list username "Wally" on system "Trusted." This would mean that the "Wally" account on "Trusted" would be able to access the "Barry" account on "Trusting" without using a password.

If the system you are auditing has trust relationships with other machines, then the security of the trusting system depends on the security of the trusted system. If the accounts that are trusted can be compromised, then, by definition, the accounts on the system you are auditing will be compromised. This is so because access to the trusted machine provides access to the trusting machine. It is best to avoid this sort of dependency if at all possible. The first option should be to eliminate trusted access. If it becomes obvious to the auditor that this is not feasible in the environment, the steps in the "How" section below can be used to mitigate the risk.


Trusted access works via the usage of the Berkeley "r" commands (e.g., rlogin, rsh, and rexec). These commands are designed to automatically look for trusted relationships via .rhosts and /etc/hosts.equiv files when executed. If a trusted relationship doesn't exist, these commands will require the entry of a password. If trusted relationships do exist, these commands will not require the entry of a password.


If NIS is used, it is also possible to grant trusted access to specific netgroups (groups of usernames).


Examine the contents of the /etc/hosts.equiv file and any .rhosts files on the system. The contents of the /etc/hosts.equiv file can be viewed by using the more /etc/hosts.equiv command. To find .rhosts files, you will need to view the contents of each user's home directory via the ls -l command (the location of user home directories can be found in the password file) in order to see whether a .rhosts file exists. The contents of any .rhosts files found can be viewed by using the more command. If file permissions restrict you from viewing the contents of these files, you will need to have the system administrator perform these commands for you.

Discuss the contents of these files with the system administrator to understand the business need for each entry. Encourage the administrator to delete any unnecessary entries or preferably to eliminate the use of trusted access altogether.

Ensure that none of the files contain the "+" sign. This symbol defines all the systems on the network as trusted and enables them all to log on without using a password (if there is an equivalent username on the trusting server). If the "+" sign exists in the /etc/hosts.equiv file, then any user (except "root") on any system on the network who has the same username as any of the accounts on the trusting system will be able to access the account without using a password. If the "+" sign exists in a .rhosts file, then any user on any system on the network who has the same username as the owner of the .rhosts file will be able to access the account without using a password. This includes the "root" account, so a .rhosts file with a "+" in root's home directory is usually a particularly bad idea.
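The scan for the "+" wildcard can be sketched as follows. HOMES would normally be derived from the home-directory field of the password file; here a scratch home directory containing a deliberately bad .rhosts file stands in for it.

```shell
# Flag any .rhosts file that begins an entry with the "+" wildcard,
# which trusts every host (or host/user pair) on the network.
HOMES=$(mktemp -d)                         # stands in for /home
mkdir "$HOMES/barry"
printf '+\n' > "$HOMES/barry/.rhosts"      # trusts every host: a finding

rhosts_findings=""
for f in "$HOMES"/*/.rhosts; do
    [ -f "$f" ] || continue
    if grep -q '^+' "$f"; then
        rhosts_findings="$rhosts_findings FINDING: $f contains +"
    fi
done
echo "$rhosts_findings"
rm -rf "$HOMES"
```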

For any legitimate and necessary trust relationships, determine how the administrator is comfortable that each system to which trusted access is given is as secure as the system being audited. As mentioned earlier, the system's security depends on the security of any system being trusted. System administrators generally should not give trusted access to systems they do not control. If they do, they should take steps to obtain assurance as to the security and integrity of the systems being trusted either by performing their own security scans or by conducting interviews with the system administrator.

If trusted hosts are needed in the /etc/hosts.equiv file, ensure that trusted users are not specified in this file. In some versions of Unix, a trusted user specified in this file will be allowed to log into the system as any username (except "root") without entering a password.

If trusted access is allowed, usernames in the password files must be consistent across each system involved in the trusted relationship. Determine whether this is the case. If system2 trusts system1, then username "Bob" on system1 can log in as username "Bob" on system2 without entering a password. If "Bob" on system1 is Bob Feller, while "Bob" on system2 is Bobby Thompson, then Bobby Thompson's account now has been compromised.

Ensure that the /etc/hosts.equiv and .rhosts files are secured properly (using the ls -l command). The /etc/hosts.equiv file should be owned by a system account (such as "root") and writable only by that account. If others can write to this file, they could list unauthorized machines in the trusted hosts list. The .rhosts files should be owned by the account in whose home directory they sit and should be writable only by that account. If a user can write to another user's .rhosts file, he or she could make himself or herself, or someone else, trusted to log into that user's account from another machine.

Ensure that entries use the fully qualified domain name for systems being trusted (e.g., "rangers.mlb.com" instead of just "rangers"). An entry that does not use the fully qualified domain name could be spoofed by a machine with the same host name but a different domain.

29 If anonymous FTP is enabled and genuinely needed, ensure that it is locked down properly.

Anonymous File Transfer Protocol (FTP) allows any user on the network to get files from or send files to restricted directories. It does not require the use of a password, making it important that it be controlled properly.


To determine whether anonymous FTP is enabled, examine the contents of the password file(s). If there is an "ftp" account in the password file and the FTP service is enabled, then anonymous FTP is available on the system. Once an anonymous FTP user has logged in, he or she is restricted to only those files and directories within the "ftp" account's home directory, which is specified in ftp's password entry (we'll assume that the home directory is at /ftp for this step). The "ftp" account should be disabled in the password file and should not have a valid shell.

Ensure that the FTP directory (e.g., /ftp) is owned and writable only by "root" and not by "ftp." When a user uses anonymous FTP, he or she becomes user "ftp." If "ftp" owns its own files and directories, anyone using anonymous FTP could alter the file permissions of anything owned by ftp. This can be determined by performing the ls -l command on the "ftp" home directory. "Ftp" should only own the /ftp/pub directory.

Examine the permissions of the /ftp directory and the subdirectories (by using the ls -l command).

  • The /ftp/pub directory should have the sticky bit set so that people cannot delete files in the directory.

  • The /ftp directory and its other subdirectories should be set with permissions at least as restrictive as dr-xr-xr-x so that users can't delete and replace files within the directories.

Ensure that the /ftp/etc/passwd file contains no user entries (just "ftp") or passwords (by performing the more command on the file). Otherwise, anyone on the network can see usernames on the server and use those for attacking the system. It should not allow group or world write permissions (ls -l /ftp/etc/passwd).
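A sketch of the password-field check, using a hypothetical stand-in for /ftp/etc/passwd. The password field (the second colon-delimited field) should never contain a real hash.

```shell
# Count entries in an FTP-area passwd copy whose password field is neither
# "*" nor "x" (i.e., entries that may expose a real password hash).
ftppw=$(mktemp)
cat > "$ftppw" <<'EOF'
ftp:*:500:500:Anonymous FTP:/ftp:/bin/false
EOF
hash_count=$(awk -F: '$2 != "*" && $2 != "x"' "$ftppw" | wc -l)
echo "entries with possible password hashes: $hash_count"
rm -f "$ftppw"
```

Any nonzero count warrants an immediate conversation with the system administrator.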

Other files outside of the /ftp/pub directory should not allow group or world write access (verify by using the ls -l command).

Attackers could transfer large files to the /ftp directories and fill up the file system (in order to commit a denial-of-service attack and/or prevent audit logs from being written). The system administrator should consider placing a file quota on the "ftp" user or placing the /ftp home directory on a separate file system.

30 If NFS is enabled and genuinely needed, ensure that it is secured properly.

Network File System (NFS) allows different computers to share files over the network. Basically, it allows directories that are physically located on one system (the NFS server) to be mounted by another machine (the NFS client) as if they were part of the client's file structure. If the directories are not exported in a secure manner, it can expose the integrity and availability of that data to unnecessary risks.


NFS use can be verified by examining the /etc/exports file or the /etc/dfs/dfstab file (using the more command). If this file shows that file systems are being exported, then NFS is enabled.

Because NFS authorizes users based on UID, UIDs on all NFS clients must be consistent. If Clark's account is UID 111 on the system being audited, but Bruce's account is UID 111 on an NFS client, then Bruce will have Clark's access level for any files that are exported (because the operating system will consider them to be the same user). After determining which systems can mount critical directories from the system you're auditing, you will need to work with the system administrator to determine how UIDs are kept consistent on those systems. This may involve obtaining a copy of each system's password file and comparing UIDs that appear in both the NFS server and an NFS client. Note: The same risk exists and should be investigated for GIDs.

Review the /etc/exports file or the /etc/dfs/dfstab file (using the more command):

  • Ask the system administrator to explain the need for each file system to be exported.

  • Ensure that the access= option is used on each file system being exported. Otherwise, any machine on the network will be able to access the exported file system. This option should be used to specify the hosts or netgroups that are allowed to access the file system.

  • Ensure that read-only access is given where possible using the "ro" option (note that read/write is the default access given if read-only is not specified).

  • Ensure that root access is not being given to NFS clients (i.e., the root= option is not being used) unless absolutely necessary and unless the NFS clients have the same system administrator as the server. The root= option allows remote superuser access for specified hosts.

  • Ensure that root accounts logging in from NFS clients are not allowed root access. You should not see anon=0, which would allow all NFS clients superuser access.
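To illustrate the options discussed in the bullets above, here are hypothetical export entries (host and path names are invented) showing restricted, read-only sharing in SunOS-, Solaris-, and Linux-style syntax:

```
# SunOS-style /etc/exports entry
/export/reports   -access=client1:client2,ro

# Solaris /etc/dfs/dfstab equivalent
share -F nfs -o ro=client1:client2 /export/reports

# Linux /etc/exports equivalent
/export/reports   client1(ro) client2(ro)
```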

Review the contents of the /etc/fstab or the /etc/vfstab (or /etc/checklist for HP systems) file (using the more command) to see if the system you are auditing is importing any files via NFS. If it is, ensure that the files are being imported "nosuid." If SUID files are allowed, the NFS client could import a file that is owned by "root" and has permissions set to rwsr-xr-x. Then, when a user on the NFS client runs this program, it will be run as that client's superuser. The root user on the NFS server could have inserted malicious commands into the program, such as a command that creates a .rhosts file in the client "root" user's home directory. This .rhosts file then could be used by the NFS server to obtain unauthorized superuser access to the NFS client. Note: If the system administrator is the same on both the NFS client and the NFS server, this is not a big risk.

On all these NFS steps, the auditor should use judgment. The criticality of the files being exported should influence the scrutiny with which the auditor reviews them.

31 Review for the use of secure protocols.

Certain protocols (e.g., telnet, ftp, rsh, rlogin, and rcp) transmit all information in clear text, including user ID and password. This could allow someone to obtain this information by eavesdropping on the network.


Review the list of services that are enabled and determine whether telnet, ftp, and/or the "r" commands are enabled. If so, via interviews with the system administrator, determine the possibility of disabling them and replacing them with secure (encrypted) alternatives. telnet, rsh, and rlogin can be replaced by SSH. ftp can be replaced by SFTP or SCP. rcp can be replaced by SCP.


The use of secure protocols is particularly important in a DMZ and other high-risk environments. The auditor may determine that it is of less importance on the internal network. However, it is still advisable to use secure protocols even on internal networks in order to minimize attacks from within.

32 Review and evaluate the use of .netrc files.

.netrc files are used to automate logons. If a confidential password is placed in one of these files, the password may be exposed to other users on the system.


The following command can be used to find and print the contents of all .netrc files on the system. You likely will need to have the system administrator run this command to search the entire system.

find / -name '.netrc' -print -exec more {} \;

For any .netrc files found, review the file contents. If read access is restricted, you will need the system administrator to do this for you. Look for indications of passwords being placed in these files. If so, review file permissions via the ls -l command, and ensure that no one besides the owner can "read" the file. Even if file permissions are locked down, anyone with superuser authority still will be able to read the file, so it's better to avoid using them at all. However, if they exist and are absolutely necessary, the auditor should ensure that they have been secured to the extent possible.

33 Ensure that a legal warning banner is displayed when connecting to the system.

A legal logon notice is a warning displayed whenever someone attempts to connect to a system. This warning should be displayed prior to actual login and basically should say, "you're not allowed to use this system unless you've been authorized to do so." Verbiage of this sort may be needed to prosecute attackers in court. Unfortunately, court rulings have dictated that you have to specifically tell someone not to hack your system or you can't prosecute them for doing so.


Log into your account using each available mechanism that provides shell access, such as telnet and SSH. Determine whether a warning banner is displayed. The text for this banner frequently is located in files such as /etc/issue and /etc/sshd_config (or /etc/openssh/sshd_config). Via interviews with the system administrator, determine whether the verbiage for this warning banner has been developed in conjunction with the company's legal department.
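For the SSH side of this test, a check along these lines can confirm that a Banner directive is present and not disabled. The sample sshd_config is illustrative; on a real system, point the function at the actual configuration file.

```shell
# Sketch: confirm sshd is configured to display a pre-login banner.
check_ssh_banner() {
  awk 'tolower($1)=="banner" && $2!="none" {found=1} END {exit !found}' "$1" \
    && echo "banner configured" || echo "NO BANNER"
}

cat > /tmp/sshd_config.sample <<'EOF'
Port 22
Banner /etc/issue.net
EOF

check_ssh_banner /tmp/sshd_config.sample   # prints: banner configured
```

Remember that the banner file the directive points to must itself contain the legally approved verbiage.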

34. Review and evaluate the use of modems on the server.

Modems bypass corporate perimeter security (e.g., firewalls) and allow direct access to the machine from outside the network. They present significant risk to the security of the machine on which they reside and also may allow the modem user to "break out" of the machine being audited and access the rest of the network. Allowing dial-in modems to be placed on a production machine is usually a bad idea. It is almost always preferable to have access to a machine channeled through standard corporate external access mechanisms such as VPN or RAS.


Unfortunately, there is no reliable method of determining whether a modem is connected to a machine outside of physical inspection. If physical inspection is not practical, the next-best option will be to interview the system administrator to understand whether modems are used. As mentioned earlier, if they are used, alternate mechanisms for allowing external access to the machine should be investigated. If it is determined that a dial-in modem is truly necessary, consider implementing compensating controls such as dial-back to trusted numbers (i.e., when a call is received, the machine hangs up and dials back to a trusted number) and authentication.

Audit Logs

35. Review controls for preventing direct "root" logins.

Because a number of people usually know the "root" password, if people are allowed to login directly as that account, there is no accountability for actions performed by that account. If inappropriate actions are performed by the "root" account, there will be no way to trace those actions back to a specific user. It is preferable to force people to login as themselves first and then use su or sudo to access the "root" account.


Review the wtmp log (by performing the more command on /usr/adm/wtmp, /var/adm/wtmp, or /etc/wtmp, depending on the type of system) to verify that there are no direct "root" logins. The last command can be used to view the contents of this file on most systems. Exceptions would be direct logins from the console, which may be needed for emergencies.
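The wtmp review can be sketched as a filter over last-style output: flag any "root" login whose source terminal is not the console. The sample data below mirrors the general format of last(1) output and is illustrative only.

```shell
# Sketch: flag direct "root" logins from anywhere other than the console.
direct_root_logins() {
  awk '$1=="root" && $2!="console"' "$1"
}

cat > /tmp/last.sample <<'EOF'
root     console                       Mon Jan 15 08:01
root     pts/2        10.1.2.3         Mon Jan 15 09:14
alice    pts/0        10.1.2.9         Mon Jan 15 09:30
EOF

direct_root_logins /tmp/last.sample   # flags only the network (pts/2) root login
```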

Review settings for preventing direct "root" logins via telnet and rlogin.

  • The file /etc/default/login can be used to disable direct root logins on Solaris machines. If this file is available, the CONSOLE= parameter should be set to the pathname of a nonexistent device. If the administrator wishes to place the pathname of the actual console device (the terminal directly linked to the Unix machine) into this parameter, the console should be in a secure location. The contents of this file can be viewed by executing the more /etc/default/login command.

  • On Linux and HP systems, the /etc/securetty file can be used to prevent direct logins as "root." The file lists the terminals from which direct "root" login is allowed; ideally, it should exist but be empty. Sometimes the system administrator will want to allow direct "root" login from the console terminal. This is acceptable as long as the console is in a secure location. The contents of this file can be viewed by executing more /etc/securetty.

Review settings for preventing direct "root" logins via SSH. The /etc/sshd_config or /etc/openssh/sshd_config file is used for this purpose. Review the contents of this file using the more command. Look for the PermitRootLogin parameter. If this parameter is set to a value of no, "root" logins are not permitted. If the parameter is not there or is set to a value of yes, "root" logins are permitted.
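This check can be scripted as below. The sample file is illustrative; the sketch takes the first occurrence of the parameter (which matches sshd's own config parsing) and reports an absent parameter as "not set" so that the version-dependent OpenSSH default can be evaluated explicitly.

```shell
# Sketch: report the effective PermitRootLogin value (first match wins,
# as in sshd's own parsing); "not set" means the compiled-in default applies.
root_login_setting() {
  awk 'tolower($1)=="permitrootlogin" {print $2; found=1; exit}
       END {if (!found) print "not set"}' "$1"
}

cat > /tmp/sshd_config.sample <<'EOF'
PermitRootLogin no
EOF

root_login_setting /tmp/sshd_config.sample   # prints: no
```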

Review settings for preventing direct "root" logins via FTP. This can be done by placing a "root" entry in the /etc/ftpusers file. Review the contents of this file using the more command.

36. Review the su and sudo command logs to ensure that when these commands are used, they are logged with the date, time, and user who typed the command.

The su command is a tool used frequently by attackers to try to break into a user's account. The sudo command allows authorized users to perform commands as if they were "root." The use of both commands should be logged in order to ensure accountability and to aid in investigations.


Attempt to perform a more command on the su log. However, the log may be protected, so you may not be able to do this. If this is the case, have the system administrator provide you with a copy. For some systems, the su log will be at /usr/adm/sulog, /var/adm/sulog, or /var/log/auth.log. For other systems, the /etc/default/su file will determine where the su log will be kept.

  • Ensure that this file exists and is capturing information on su usage (e.g., who performed the command, what account they switched to, the date and time of the command, and indications as to whether or not the command succeeded).

  • Also question any instance of one user su'ing to another user's account. There should be little to no reason for one user to attempt to su to another user's account on the system. Most su's should be from an administrator's account to "root" or from a user account to an application ID.
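Reviewing the sulog for user-to-user su attempts can be partially automated. The entry format below follows the Solaris-style sulog ("+" = success, "-" = failure); formats vary by platform, and the "appid" application account is a hypothetical example of an authorized target.

```shell
# Sketch: flag su attempts whose target is neither "root" nor a known
# application ID ("appid" here is a hypothetical authorized target).
suspicious_su() {
  awk '$1=="SU" {split($NF, p, "-"); if (p[2] != "root" && p[2] != "appid") print}' "$1"
}

cat > /tmp/sulog.sample <<'EOF'
SU 01/15 10:22 + pts/2 alice-root
SU 01/15 11:05 - pts/3 bob-alice
SU 01/16 09:40 + pts/1 carol-appid
EOF

suspicious_su /tmp/sulog.sample   # flags only the bob-alice attempt
```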

View the sudo log to ensure that it is capturing information on sudo usage (e.g., who performed the command, what command was performed, and the date and time of the command). By default, the sudo logs are written to the syslog, but this can be changed in /etc/sudoers, so check for the location on your system (using the more command).

37. Evaluate the syslog in order to ensure that adequate information is being captured.

If system audit logs are not kept, there will be no record of system problems or user activity and no way to track and investigate inappropriate activities.


View the contents of the /etc/syslog.conf file using the more command. The /etc/syslog.conf file determines where each message type is routed (to a file name, to a console, and/or to a user). At a minimum, crit and err messages related to auth (authorization systems, i.e., programs that ask for usernames and passwords), daemon (system daemons), and cron (the cron daemon) probably should be captured, along with emerg and alert messages.

Each syslog message contains, in addition to the program name generating the message and the message text, the facility and priority of the message.

Following are some of the common potential syslog facilities (i.e., the type of system function):

  • kern: kernel

  • user: normal user processes

  • mail: mail system

  • lpr: line printer system

  • auth: authorization systems (programs that ask for usernames and passwords)

  • daemon: system daemons

  • cron: cron daemon

Following are the potential priority levels, which indicate the severity of the message:

  • emerg: emergency condition (e.g., imminent system crash)

  • alert: immediate action needed

  • crit: critical error

  • err: normal error

  • warning: warning

  • notice: not an error, but special handling needed

  • info: informational message

  • debug: used when debugging programs

Note that these are listed in descending order (most critical to least critical). When specifying a logging level, it encompasses that level and higher, so logging at the debug (lowest) level, for example, also would log all other levels. An asterisk for the facility or level indicates that all facilities or levels are logged.
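Putting the facilities and priorities together, a minimal /etc/syslog.conf in line with the guidance above might contain entries such as the following (illustrative only; each selector is a facility.priority pair, and each priority captures that level and higher):

```
# route auth, daemon, and cron errors (err and above, which includes crit)
auth.err                /var/log/authlog
daemon.err;cron.err     /var/log/daemonlog
# broadcast emergencies and alerts to all logged-in users
*.emerg;*.alert         *
```

Note that older syslogd implementations require tabs, not spaces, between the selector and the action.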

On HP systems, the /etc/btmp file contains invalid login attempts. Determine whether this file exists; if not, it should be created. On Solaris, the file /var/adm/loginlog will log anytime a user tries to log into the system but types a bad password five times in a row (by default; the number can be configured in the /etc/default/login file). If this file does not exist, it should be created.

38. Evaluate the security and retention of the wtmp log, sulog, syslog, and any other relevant audit logs.

If the audit logs are not secure, then unauthorized users could change their contents, thus damaging their usefulness during investigations. If they are not retained for an adequate period of time, then the administrator may be unable to investigate inappropriate activities and other system issues if needed.


The locations of the log files are discussed in previous steps in this section. Perform an ls -l command on those files. They usually should be writable only by "root" or some other system account.

Interview the system administrator to determine retention, which could be either online or offline. It is generally preferable to retain these security logs for at least 3 to 6 months to allow for adequate history during investigations.
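The permission portion of this step can be sketched as follows; the sample file stands in for wtmp, sulog, syslog, or any other audit log.

```shell
# Sketch: flag a log file if anyone besides the owner can write to it.
writable_only_by_owner() {
  perms=$(ls -l "$1" | cut -c5-10)   # group+other permission bits
  case "$perms" in
    *w*) echo "GROUP/WORLD WRITABLE: $1" ;;
    *)   echo "ok: $1" ;;
  esac
}

touch /tmp/wtmp.sample
chmod 664 /tmp/wtmp.sample                # deliberately group-writable for the demo
writable_only_by_owner /tmp/wtmp.sample   # flags the file
```

Ownership by "root" or another system account should still be confirmed separately from the ls -l output.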

39. Evaluate security over the utmp file.

The utmp log keeps track of who is currently logged into the system and includes information regarding what terminals those users are logged in from. By changing the terminal name in this file to that of a sensitive file, an attacker can get system programs that write to user terminals to overwrite the target file. This would cause this sensitive file to be corrupted.


Perform an ls -l command on the utmp file, which is usually located at /etc/utmp on Unix systems and at /var/run/utmp on Linux systems. The file should be owned by "root" or another system account and should allow only owner write.

Security Monitoring and Other Controls

40. Review and evaluate system administrator procedures for monitoring the state of security on the system.

If the system administrator does not have processes for performing security monitoring, security holes could exist, and security incidents could occur without his or her knowledge.


Interview the system administrator, and review any relevant documentation to get an understanding of security monitoring practices. Numerous levels and methods of security monitoring can be performed. Not all of them need to be performed, but you should see some level of monitoring, contingent on the criticality of the system and the inherent risk of the environment (e.g., a web server in the DMZ should have more robust security monitoring than a print server on the internal network). Basically, you want to know how the system administrator is monitoring for problems such as those you have been auditing for throughout the other steps in this chapter.

Listed below are four primary levels of monitoring. Potential tools for performing these types of monitoring are discussed in the "Tools and Technology" section later in this chapter:

  • Network vulnerability scanning. This is probably the most important type of security monitoring in most environments. This is monitoring for potential vulnerabilities that could allow someone who has no business being on the system to either gain access to the system or disrupt the system. Since these vulnerabilities can be exploited by anyone on the network, it is important to be aware of them and close them down.

  • Host-based vulnerability scanning. This is scanning for vulnerabilities that would allow someone who's already on the system to escalate their privileges (e.g., exploit the "root" account), obtain inappropriate access to sensitive data (e.g., owing to poorly set file permissions), or disrupt the system. This type of scanning generally is more important on systems where there are many nonadministrative end users.

  • Intrusion detection. This is monitoring in order to detect unauthorized entry (or attempts at unauthorized entry) into the system. Baseline monitoring tools (e.g., Tripwire) can be used to detect changes to critical files, and log-monitoring tools can be used to detect suspicious activities via the system logs.

  • Intrusion prevention. This is a type of monitoring that detects an attempted attack and stops the attack before it compromises the system. Examples include host-based Intrusion Prevention System (IPS) tools and network-based IPS tools such as TippingPoint.

If security monitoring is being performed, assess the frequency of the monitoring and the quality with which it is performed. Look for evidence that the security monitoring tools actually are used and acted on. Review recent results, and determine whether they were investigated and acted on. Leverage the results of the rest of the audit in performing this assessment. For example, if you found significant issues in an area that they are supposedly monitoring, it might lead to questions as to the effectiveness of that monitoring.

41. If you are auditing a larger Unix/Linux environment (as opposed to one or two isolated systems), determine whether there is a standard build for new systems and whether that baseline has adequate security settings. Consider auditing a system freshly created from the baseline.

One of the best ways to propagate security throughout an environment is to ensure that new systems are built right. In this way, as new systems are deployed, you have confidence that they initially have the appropriate level of security.


Through interviews with the system administrator, determine the methodology used for building and deploying new systems. If a standard build is used, audit a newly created system using the steps in this chapter.

42. Perform steps from Chapter 4 as they pertain to the system you are auditing.

In addition to auditing the logical security of the system, it is important to ensure that appropriate physical controls and operations are in place to provide for system protection and availability.


Reference the steps from Chapter 4, and perform those that are relevant to the system being audited. For example, the following topics are likely to be pertinent:

  • Physical security

  • Environmental controls

  • Capacity planning

  • Change management

  • System monitoring

  • Backup processes

  • Disaster recovery planning

IT Auditing: Using Controls to Protect Information Assets
Year: 2004
Pages: 159
