Search Logfiles


There are dozens of tools that can search logfiles and locate activities of interest. This section will help you put them to use. First, we will explain the overall strategy to adopt when searching logfiles. Next, we will look at examples of how to check a logfile manually using the grep command. Then we will look at logwatch, a tool that is available as part of Red Hat Enterprise Linux AS 3.0, and logsurfer, a tool found in SUSE SLES8. Last, we will look at tools you can download and install, such as swatch.

There are several web sites that provide useful information to help you. One great resource with links to useful programs, settings, log samples, and configuration samples is http://loganalysis.org/. A second site to check out is http://www.iss.net/, which offers useful downloads and white papers.

Strategy for Searching Logfiles

The challenge inherent in logfile analysis is separating exceptional activity from normal activity. This presupposes that you know what regular activity on your system and network looks like. Without experience to draw upon, it is difficult to know how regular occurrences are represented in the logfiles, and it will likely take some time before normal logfile activity becomes familiar. Clearly, this cannot be done in one sitting; it needs to be part of a process that is developed over time. Also, as applications and users are added to or changed on the network, the logs are likely to change too.
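One rough way to begin building that familiarity is to collapse recurring messages into their general shapes and review the most common ones. The following is only a sketch, assuming the standard syslog timestamp and PID conventions shown in the log examples later in this section:

 # Strip timestamps and PIDs so recurring messages collapse together,
 # then list the most common message shapes with their counts
 sed -e 's/^... .. ..:..:.. //' -e 's/\[[0-9]*\]//' /var/log/messages \
     | sort | uniq -c | sort -rn | head -40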

The next step after isolating an exceptional circumstance is to correctly identify when the exception constitutes an alarm condition. This can only be determined by understanding your business requirements. There needs to be a balance between system availability and risk. If it is imperative that the system be protected because it contains crucial company information, such as credit card numbers, then there will be a decrease in availability, because you may sometimes need to take the system off the network to protect it. There are circumstances where activity in the logs is so important that the machine should be shut down or immediately removed from the network.

If you need to keep a server up as much as possible, you will need to be more tolerant of exceptions while keeping a watchful eye. If administrative staff carry pagers, they can be interrupted with a page when some activity requires immediate corrective action. An example of activity that warrants immediate concern is evidence of port spoofing, where the logged source port does not match the FQDN. If staff cannot respond immediately, the best course of action might be to shut the machine down at once with the shutdown command:

 shutdown -h now

You can shut down a remote machine with an expect script that logs into the affected box and executes a shutdown command. An expect script that performs a remote shutdown looks something like this:

 #!/usr/bin/expect -f
 # Perform a remote shutdown
 set inet_host "192.168.1.10"
 spawn ssh "$inet_host"
 # give it a second to connect
 sleep 1
 expect "ssword:"
 send "password_of_uid_searching_logs\n"
 sleep 2
 send "su -\n"
 expect "ssword:"
 send "remote_machines_root_password\n"
 sleep 1
 send "shutdown -h now\n"
 sleep 10

The drawback to this type of remote shutdown is that the root password is embedded in the script, which is less than desirable. Also, if the root password changes, the script needs to be edited.
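One way to avoid embedding passwords is an SSH key restricted to a single forced command. The following sketch assumes you can install a key on the remote host; the key path and addresses are illustrative:

 # On the monitoring host, generate a dedicated, passphrase-less key pair:
 ssh-keygen -t rsa -f ~/.ssh/remote_shutdown -N ""

 # On the remote host, prepend a forced command to the public key in
 # root's ~/.ssh/authorized_keys so the key can do nothing else:
 #   command="/sbin/shutdown -h now" ssh-rsa AAAA...base64key... shutdown-key

 # A remote shutdown then needs no password anywhere in the script:
 ssh -i ~/.ssh/remote_shutdown root@192.168.1.10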

Searching Logfiles Manually

grep is one of the most powerful Unix shell commands. An excellent use of this text-search command is to look for patterns inside logfiles. Using grep is easy: on the command line, type

 grep "  what to look for  "  file to look in  

For example:

 grep "failed" /var/log/messages. 

When you run this command, you receive a list of every line in the file containing the word failed. By default the grep command is case sensitive, so depending on the context you may want to use grep with the -i flag to make searches case insensitive. One challenging aspect of searching logfiles is that you need to know what you are looking for before you can find it. There are a couple of ways to approach this dilemma. If there is an activity you know you want to catch, such as users trying to su to root, you can simply perform the activity and look for it in the logs. For instance, an unsuccessful su would look something like this in SUSE:

 Apr  1 11:15:54 chim su: FAILED SU (to root) rreck on /dev/pts/1 

Therefore, to find all such activities you would type

 grep "FAILED SU" /var/log/messages 

Similarly, if you wanted to find failed remote access attempts, you might try logging in and deliberately getting the password wrong. A failed SSH login would look something like this:

 Apr  1 11:24:17 chim sshd[1934]: Failed password for rreck from ::ffff:192.168.1.99 port 32942 ssh2 

Then, to find a similar case you could type

 grep "Failed password" /var/log/messages 

Both of these activities are reasonable to watch for, as they can be indicative of an attack. If grep shows only a couple of failed attempts, it's probably because someone forgot their password or mistyped it. On the other hand, if grep highlights dozens of failed access attempts, someone is probably trying to break in, and you should take steps to deny them access at the network level.
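To gauge the scale quickly, you can tally the failed attempts by source address and then block a persistent offender. This is a sketch that assumes the log format shown above (field positions may differ on your system), and the blocked address is illustrative:

 # Count failed SSH logins per source address
 grep "Failed password" /var/log/messages | awk '{print $(NF-3)}' \
     | sort | uniq -c | sort -rn

 # Deny a persistent offender at the network level
 iptables -A INPUT -s 192.168.1.99 -j DROP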

Search Logfiles with logwatch

One commonly used tool, logwatch, was written by Kirk Bauer in Perl. logwatch has been part of the standard Red Hat distribution for quite a while. Its configuration file can be found at /etc/log.d/conf/logwatch.conf, and there are several other directories used by logwatch under /etc/log.d/. You probably won't need to change anything, since logwatch runs well without configuration modification, but it is helpful to know where to look for its files if you do.

The configurable options include the level of detail to include, whether to e-mail the results or print them to the screen, and a limited date range choice of Today, Yesterday, or All.
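For reference, the relevant entries in /etc/log.d/conf/logwatch.conf look something like the following excerpt; the values shown are illustrative defaults, and the shipped file documents each option in its comments:

 LogDir = /var/log
 MailTo = root
 Print = No
 Range = yesterday
 Detail = Low
 Service = All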

If you are running Red Hat Enterprise Linux AS 3.0, logwatch 4.3.2 is scheduled to run daily by default, because of a softlink in /etc/cron.daily/logwatch. The default configuration checks yesterday's system logs in /var/log/ and mails the results to the root user. You can see a sample of the report by typing

 /etc/cron.daily/00-logwatch --print --range all

The report starts with a header that shows the settings logwatch ran with. Next, there are sections for PAM activity, connections, SSH, and disk space. One nice feature of the report format is that it condenses and tallies the results for the day's activity.

Search Logfiles with logsurfer

The program logsurfer comes with SUSE SLES 8.1 and 9.0. logsurfer was written to allow more precise decisions than other log searching programs such as swatch, which we will look at in the next section. Much like other log searching programs, logsurfer compares each line in a logfile against regular expressions and, if there is a match, performs an action. The actions are expressed as rules. logsurfer goes further than swatch in a few ways. First, logsurfer matches lines using two regular expressions: the logfile's line must match the first expression but must not match the optional second expression. This is useful because it allows you to express exceptions. Another huge strength of logsurfer is that it works on contexts instead of single lines. This is handy because a single line in a logfile does not always contain enough information to make a decision.

On the downside, logsurfer can be hard to configure since you need to really understand regular expressions, and there are not many configuration examples included.

Configuring logsurfer

The best place to find information about logsurfer is the man pages. The pages you see when you type man logsurfer or man logsurfer.conf are not the only ones: a more detailed man page is available when you type man 4 logsurfer.conf, which tells the man command that you want the page from section 4.

The details in Table 12-3 can be found in the man pages. Each line in the logsurfer configuration file is one of three things: if the line starts with #, it's a comment; if it starts with white space, it is a continuation of the previous rule; otherwise, it's the start of a new rule. Each new rule has six mandatory fields and one optional field. Table 12-3 explains the function of each of the seven fields.

Table 12-3: logsurfer Fields

1. match_regex (required): If the line matches this regular expression, the rest of the rule is parsed; otherwise the remaining fields are skipped and logsurfer continues matching the line against other rules.

2. not_match_regex (required; use - for none): If this field is anything other than a hyphen (-), it is treated as a regex to contrast with the first: the line must match the first regex but not this one.

3. stop_regex (required; use - for none): If this field is anything other than a hyphen (-) and it matches, the rule is deleted from the list of active rules.

4. not_stop_regex (required; use - for none): If this field is anything other than a hyphen (-), it is treated as a regex that must not match. The rule is deleted only if stop_regex matches and this regex does not.

5. timeout (required; use 0 for none): The number of seconds the rule remains active. Set this to 0 so the rule never times out.

6. continue (optional): If the word continue appears, logsurfer keeps trying to match later rules against the current line; otherwise the remaining rules are skipped.

7. action (required): One of the following actions: ignore, exec, pipe, open, delete, report, rule.

The last field of any logsurfer rule is the action field. The actions are useful for opening or deleting contexts and creating new rules. Table 12-4 lists the possible actions and what they do.

Table 12-4: logsurfer Actions

ignore: Do nothing with the line. This is useful when you know the line is not important.

exec: Execute a program; the argument following this action is the program to run.

pipe: Similar to exec, except that the invoked program also receives the actual log line on stdin.

open: Open a new context, unless a context already exists for the match_regex.

delete: If an existing match_regex is given as an argument, the specified context is closed and deleted without applying its default_action.

report: The first argument specifies the external program (including options) to invoke. All further arguments specify context definitions that are summarized and fed as standard input to the invoked program.

rule: Create a new rule. The keyword rule must be followed by an indication of where the new rule is to be placed: before, behind, top, or bottom.
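Putting the fields and actions together, a minimal logsurfer.conf might look like the following sketch. The message patterns are based on the log examples in this chapter, the alert script is hypothetical, and the field order follows Table 12-3:

 # Fields: match_regex not_match_regex stop_regex not_stop_regex timeout action

 # Ignore routine session messages
 'session opened for user root' - - - 0 ignore

 # Run a (hypothetical) alert script whenever an su attempt fails
 'FAILED SU' - - - 0 exec "/usr/local/bin/su-alert.sh"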

logsurfer does not need to run as the root user, and a user can be created specifically to run it. If the logs logsurfer needs to read are restricted, it is recommended that you create a group for system administration and add the logsurfer user to that group. For this to work, the logs themselves need to permit group-level read access. Use the chmod command:

 chmod g+r logfile
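The whole arrangement might look like this sketch, assuming a dedicated logsurfer account and a group named logadmin (both names are illustrative):

 groupadd logadmin
 useradd -G logadmin logsurfer
 chgrp logadmin /var/log/messages
 chmod g+r /var/log/messages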

Search Logfiles with swatch

swatch is not part of the standard distribution for Red Hat Enterprise Linux AS 3.0 or SUSE SLES. Since swatch is easy to install, configure, and work with, you should consider downloading and installing it. On the command line, type

 mkdir swatch-install; cd swatch-install 

swatch requires some additional Perl libraries. The ones you need depend on which of the operating system's packages you have installed. You might need

 wget http://www.cpan.org/authors/id/J/JH/JHI/Time-HiRes-1.59.tar.gz 

You will need

 wget http://www.cpan.org/authors/id/M/MG/MGRABNAR/File-Tail-0.98.tar.gz
 wget http://www.cpan.org/authors/id/S/ST/STBEY/Date-Calc-5.3.tar.gz
 wget http://www.cpan.org/authors/id/S/ST/STBEY/Bit-Vector-6.3.tar.gz
 wget http://unc.dl.sourceforge.net/sourceforge/swatch/swatch-3.1.tar.gz

Installing swatch on Red Hat Enterprise Linux AS 3.0 requires only one more library than on SUSE SLES. Install it as the fourth step, before installing swatch itself:

 wget http://www.cpan.org/authors/id/G/GB/GBARR/TimeDate-1.16.tar.gz 

Then, for each of the files, perform these steps in the same order you downloaded them:

 tar -xzf filename.tar.gz
 cd filename
 perl Makefile.PL
 make
 make test

Check for errors, but there shouldn't be any. Then type

 make install 

swatch should now be installed, but to start working with it you need a configuration file. By default swatch looks for the file .swatchrc in the home directory of the user who invoked it, for instance /home/rreck/.swatchrc. You can specify a different configuration file using the -f flag:

 swatch -f /etc/.swatchrc 

Let's look at how swatch can spot some hacker activities and what to do when it finds them.

Modify swatch Configuration to Detect an Apache Exploit

Let's look at a real-life exploit, the information it leaves in the logs, and how to react properly by changing the system configuration. In the past, Apache's HTTP server was the target of an exploit involving mod_userdir (a default Apache module) and its default configuration. Hackers use a tool that scans remote hosts to extract disclosure information about user accounts. The tool first checks that Apache is running and that the Apache version is a vulnerable one. Next, the exploit cycles through many well-known user account names, probing your system for each. It is very fast, checking more than one account per second depending on the speed of your network connection and server. The gathered user information is then used to target other system services, such as FTP, because the hackers now know those user accounts exist.

During the attack, there is a flurry of activity in Apache's access log: many messages indicating 404 errors, when the hacker does not find a directory for a probed account, and 403 errors, when access to an existing directory is forbidden. These status codes are also returned to the attacker, and herein lies the exposure. The 403 errors are the ones the exploit is looking for, and the user accounts that generated them are likely to be targeted for further attack.

Adding these few lines to the .swatchrc file used on your Apache logs will let you know by e-mail which accounts are likely to be targeted:

 # Apache 403 errors
 watchfor / 403 /
        echo
        bell
        mail

One error does not mean trouble is brewing, but if there are several 403 errors within a few seconds, you know to take action. Since error 403 is not a common error, you could search on a regular basis, for example every night at five minutes past midnight, by adding a line like this to root's crontab:

 5 0 * * * swatch --config-file=swatchrc.apache --examine-file=/var/log/httpd/access_log

The best security plan is to run swatch as a daemon, in tail mode, so you find out immediately after the attempt happens, with a line like this:

 swatch --config-file=swatchrc.apache --tail-file=/var/log/httpd/access_log 

Once you realize this is happening, you should immediately remedy the situation by adding and altering a few configuration directives in Apache's configuration file, /etc/httpd/httpd.conf (SUSE), and restarting the daemon with /etc/rc.d/apache restart (SUSE). One possible remedy is to disable the UserDir directive globally and then enable it on a user-by-user basis by adding these lines:

 UserDir disabled
 UserDir enabled user1 user2 user3

Alternatively, you might allow UserDir globally but disable it for accounts that you don't want the world to know about, with lines like these:

 UserDir enabled
 UserDir disabled user4 user5 user6

You need to stay aware of and understand what the log activity indicates. There is no perfect recipe for security other than remaining diligent. This is merely an example of how monitoring logs can reveal information you need to react to with configuration changes. Now let's look at the activity that would be logged by another exploit.

Modify swatch Configuration to Detect an Attack on the SSH Daemon

Now let's look at an example involving the SSH daemon (sshd). Even when your services are tightened and minimized because you are running only the essentials, it is important to scan logs to keep an eye on things. Logging is only useful when something is done with the information. Someone could easily try to brute-force their way into shell access by ssh-ing to your server and trying thousands of passwords against a known user's account. The downside for them is that their attempts would look like this in a SUSE system's logs:

 Apr  1 15:17:52 linux sshd[3950]: Failed password for rreck from ::ffff:192.168.1.99 port 33113 ssh2
 Apr  1 15:17:53 linux last message repeated 2 times

The same activity would look something like this in a Red Hat system's logs:

 Jun  6 13:01:56 chim sshd(pam_unix)[17331]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=192.168.1.199 user=rreck
 Jun  6 13:02:01 chim sshd[17331]: Failed password for rreck from 192.168.1.199 port 33181 ssh2
 Jun  6 13:02:03 chim sshd[17331]: Failed password for rreck from 192.168.1.199 port 33181 ssh2
 Jun  6 13:02:03 chim sshd(pam_unix)[17331]: 2 more authentication failures; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=192.168.1.199 user=rreck

The security strategy is to notice this as soon as possible. Add a configuration snippet to the system swatchrc that looks like this:

 # Failed password
 watchfor / Failed password for /
        echo
        bell
        mail

Then, when swatch runs, whether from cron or in tail mode, you will get an e-mail alert. This notification is a call to action. Depending on your requirements, you might need to let rreck log in from a multitude of hosts while still thwarting this hacker's attempts. Edit /etc/ssh/sshd_config to explicitly deny access to rreck's account from the site with the multiple failed attempts, while permitting rreck to log in from elsewhere:

 # Prevent access for hacker even if they guess the password!
 DenyUsers rreck@192.168.1.99
 AllowUsers rreck@*

Then send sshd a SIGHUP signal or restart it with

 /etc/rc.d/sshd restart 
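To signal the running daemon directly instead, assuming the conventional pid file location:

 kill -HUP $(cat /var/run/sshd.pid)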

You know the changes are working when you see messages in /var/log/messages indicating that the hacker's attempts are now being thwarted by sshd:

 Apr  1 17:42:55 linux sshd[5864]: User rreck not allowed because listed in DenyUsers 

This log message means that even if the correct password is guessed, the user will not gain access. From the hacker's side, there is nothing to indicate that you have changed the configuration and that their efforts are now futile.

The point is to understand that security is never done; it's an ongoing effort. The best way to mitigate risk is to be diligent and stay aware of what is going on. Pay attention to the logs and to aberrations from normal activity.

Respond to Attacks and Abnormalities

In the end, it is important to realize that the best security posture involves process. It is helpful to consider all your options before something happens, so that you don't waste any time before taking action. It also helps to think things through when emotion isn't involved, so you can be sure you are reacting in the best way possible.

You can look at it like a decision tree: as you discover activities of interest, ask yourself some questions. Is this a critical issue, or something lesser, such as your ISP running a security audit or someone just snooping around? If it is critical, is an attack under way, or is something misconfigured somewhere? The best decision might be to shut the server down immediately, or at least remove it from the network.

If you have determined that an attack is occurring, should you notify an administrator at the source end of the attack? Should you notify law enforcement? What service is being attacked? Is it a service your business requires? Has your system's security been penetrated? If so, what kind of damage was done? If not, what part of your defensive barrier proved successful? Preparing yourself for these decisions in advance will allow you to act quickly when it matters most. When things go wrong in the middle of the night, you will be reluctant to wake people up unless you have decided in advance that you need to. Proper planning and solid documentation through logfiles are the best strategy for hardening your system's auditing capabilities.



