Ongoing Investigation: Implementing Forensic Tracers

How do you prevent an intrusion from happening in the future, beyond patching your systems and keeping your network better defended? As you have seen, you can spend a great deal of time and energy investigating and recovering data during forensic analysis. The saying to keep in mind is "an ounce of prevention is worth a pound of cure." Forensic analysis doesn't have to be just analysis; it can be prevention as well. There are two main components to forensic tracers:

  • File Integrity Create a database of cryptographic hashes of each file on your system so that unintended modifications can be detected (a generic sketch of the idea follows this list).

  • Intrusion Detection and Notification Monitoring for intrusions and notifying the proper personnel when they are detected is key to effective and timely recovery.
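
A generic illustration of the file integrity idea, independent of any particular tool, is the following sketch; the paths and the choice of SHA-256 are only examples:

 # Build a baseline of SHA-256 digests for every file under /etc
 mkdir -p /var/lib/integrity
 find /etc -type f -exec sha256sum {} + > /var/lib/integrity/etc.sha256
 # Later, verify the baseline and report only files whose contents have changed
 sha256sum --quiet -c /var/lib/integrity/etc.sha256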

File Integrity Solution

There are many different file integrity solutions out there: some commercial, some open source, some that began as open source but are now commercial, and vice versa. Some are designed for the integrity of source code, and some are better at building and maintaining many systems at once. We have used several and seen many more, and it is difficult to recommend just one as a general solution.

One of the easiest to use is md5mon, which provides a reasonable trade-off between collecting and updating the hashsum database and searching or comparing databases across multiple systems. It isn't the most complicated or advanced tool out there, and it helps only if you configure it properly. It is only one of many available; use the one you feel most comfortable with. What is important is that you implement one.

md5mon is a simple hashsum database and verification toolkit that, as the name implies, uses MD5 for hash computation and comparison. It can also be configured to use SHA (via shasum) instead of MD5 if that is preferable: cryptographic researchers have demonstrated weaknesses in MD5, and the algorithm is less resilient to collisions (where two different inputs produce the same hash).
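
For illustration, the two underlying hash utilities are used like this (the digest values are placeholders, and the second command assumes the Perl shasum utility is installed):

 host:~# md5sum /etc/passwd
 <32-hex-character MD5 digest>  /etc/passwd
 host:~# shasum /etc/passwd
 <40-hex-character SHA-1 digest>  /etc/passwd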

md5mon consists of the md5mon script and several files that it uses to configure itself and store the hash databases. The distribution is simple, easy to install, easy to set up in cron jobs, and easy to keep updated. Let's look at the configuration.

First, unpack the distribution into any directory of your choosing. Some people choose to hide the installation somewhere so it is less easily found; for example, in /usr/X11R6/app-defaults or something like that. We will just install in /usr/local/md5mon for now.
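
For example (the archive and directory names here are illustrative; use whatever names your copy of the distribution actually has):

 host:~# tar xzf md5mon.tar.gz
 host:~# mv md5mon /usr/local/md5mon
 host:~# cd /usr/local/md5mon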

The way md5mon works is through a series of group levels, where a group level is nothing more than a grouping of files. Lower levels are considered more important; higher levels, less important. Within the distribution directory there are two files that are an important part of configuring the group levels (sample versions of both are shown after this list):

  • dirs_<level> Contains a list of directories and files that should be monitored. The <level> part of the name is the level number; for example, dirs_0, dirs_1, dirs_5, and so on.

  • exclude_<level> Contains a list of files and directories that should be excluded from the search, so no checksums are calculated for them. This exclusion list is useful for temporary or otherwise unimportant files that change frequently.
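
Sample level-0 contents might look like the following; the entries and the one-path-per-line layout are illustrative, so consult the files shipped with the distribution for the exact format:

 host:/usr/local/md5mon# cat dirs_0
 /bin
 /sbin
 /etc
 /usr/bin
 /usr/sbin
 host:/usr/local/md5mon# cat exclude_0
 /etc/mtab
 /etc/adjtime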

Edit these two files to reflect the directories that you want included in and excluded from the database. Then update md5mon to tell it that you have modified these files; it maintains hashsums of its own configuration files as well, so you will be notified if the package or distribution itself is modified. Once that is complete, generate (update) the hash database. The command listing below illustrates this process:

 host:/usr/local/md5mon# ./md5mon -a
 host:/usr/local/md5mon# ./md5mon -u 0 1

At this point, you have updated hash databases for all the files and directories you have configured at each level. To check the integrity of the files, use the command md5mon -c 0 1. You can also use the -q or -quiet option to produce output only if there are differences; this is useful in the automated cron entries, as you will see.

Before we get to those details, you may ask what happens if someone simply modifies the configuration or the checksum database. There is a solution to this problem as well, through a special file called sums.md5mon. This file contains a checksum of all the other files, including the checksum database files themselves. It is best to store this file on removable media, for example a USB flash drive (one of the popular "thumb drives"), or somewhere off the system. The trustedsource script controls retrieval of the integrity check data. The script is fairly flexible: the data can live almost anywhere, on a floppy disk, on a web server, or anywhere else you can reach from a script. Review this file and edit it as appropriate for your installation. Our choice is to store the information for all of our systems on a shared internal web server and download it each time before the verification checks. In extremely high-security implementations, the hash databases are normally stored on read-only media; however, with today's average software lifecycle (and patching frequency), the increased regularity of updates makes this method cumbersome.
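
As a rough sketch of the web-server approach (this is not the trustedsource script shipped with md5mon, and the URL and paths are assumptions for the example), the retrieval step could be as simple as:

 #!/bin/sh
 # Fetch the known-good checksum file from an internal web server,
 # overwriting the local copy before every verification run.
 BASEDIR=/usr/local/md5mon
 URL="http://intranet.example.com/md5mon/$(hostname)/sums.md5mon"
 wget -q -O "${BASEDIR}/sums.md5mon" "$URL" || echo "WARNING: could not fetch trusted sums.md5mon"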

Next, let's look at the packaging command, which helps you copy all of the checksum information, along with the commands and everything else you need for verification, in case of data loss or corruption, or if you need to perform a side-by-side comparison on another system in a forensic analysis situation. To perform this packaging operation, use the -p or -package command to create a package of all the data. From your cron scripts you could then upload or copy it to another system or secure location; however, we prefer to make this a manual process to avoid accidentally uploading incorrect data.
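
A manual run might look like the following; the package file name and destination host are illustrative, so substitute whatever your md5mon version actually produces and wherever you keep your off-system copies:

 host:/usr/local/md5mon# ./md5mon -p
 host:/usr/local/md5mon# scp md5mon-package.tar admin@vault.example.com:/srv/md5mon/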

Now, let's automate the file integrity check and packaging processes. This is really not as hard as it may seem. Assuming you have properly configured your cron entries and mail server to deliver reports from cron jobs to you or your administrators, any checks that are run will be e-mailed directly to you. The distribution comes with a sample cron file, which you may be able to simply copy to /etc/cron.daily or a similar location, depending on the timed execution convention used by your operating system distribution. The exact location and details are operating system and distribution specific, but most modern cron implementations use either a single file called /etc/crontab or a directory of files, one per cron job, stored in /etc/cron.hourly, /etc/cron.daily, /etc/cron.monthly, and so on. Assuming the latter, create a file in /etc/cron.daily called md5mon. Now make sure this file is owned and executable by root, and root alone.

 # chown root:bin /etc/cron.daily/md5mon
 # chmod 0700 /etc/cron.daily/md5mon

Now edit this file and place the following lines in it:

 #!/bin/sh
 BASEDIR=/usr/local/md5mon
 # Run basic file integrity monitor (md5mon) on levels 0 and 1, with quiet option
 ${BASEDIR}/trustedsource # overwrite files just in case
 ${BASEDIR}/md5mon -q -c 0 1

This simply runs the trustedsource update script, which may download the configuration from a trusted external source, and then runs a check on levels 0 and 1. Any differences are reported via cron's standard reporting mechanism, which is usually to e-mail the output of the commands to the owner, in this case, root. This is pretty much foolproof. Either you get the report or you don't. If you don't, go and investigate the system. If someone tried to modify the configuration or the hashsum database file directly, the internal checks wouldn't match and this would be reported. The intruder would have to modify both the md5mon configuration and the external safe version, assuming he knows where it is.

Of course, any time the files do legitimately change, which can happen more often than you may realize, you need to update the database using the update command and then copy the new safe version to its secure location. To prevent an intruder from being able to do the same, our recommendation is that this process remain a manual one.
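
Assuming the internal web server layout described earlier, such a manual refresh might look like this (the destination host and path are illustrative):

 host:/usr/local/md5mon# ./md5mon -u 0 1
 host:/usr/local/md5mon# scp sums.md5mon admin@intranet.example.com:/var/www/md5mon/host/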

Intrusion Detection and Notification

For a thorough explanation of intrusion detection systems and technology, see Chapter 7. This section is about knowing when your system has been compromised (and hopefully getting that information quickly to people who can do something about it).

It may seem rather simple, but one of the best ways of detecting intrusions is to monitor log information and the status reports that are automatically generated about the health of the system. As mentioned in the previous section, a file integrity database and automated reports of inconsistencies may be the first line of defense, but there are a number of others; specifically, the processes that run on your system and the log output they generate.

First, are all the customary applications and services still running? How do you know? Do you monitor them? If one fails, do you look at the log output it may have generated to determine why? If you are like the rest of us, there are always a thousand other things to do, and monitoring log files and processes is not at the top of the list. Here is where automated log monitoring software is extremely useful.
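
Before turning to the log side, here is a minimal, cron-friendly sketch of the "is it still running" check; the daemon names are examples, so adjust them for your system:

 #!/bin/sh
 # Report any expected daemon that is no longer running; cron mails the output to root.
 for svc in sshd httpd syslogd; do
     pgrep -x "$svc" >/dev/null 2>&1 || echo "WARNING: $svc is not running on $(hostname)"
 done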

Automated log monitoring software consists mostly of scripts and tools that automatically check for critical or interesting output in various log files. These logs might come from the operating system itself, from your web or FTP servers, or from general system accounting: who is logging into the system, and what commands are they running?

Logwatch, written by Kirk Bauer, is an enormously popular application for UNIX systems, and for good reason. It is an easy-to-use and flexible package that runs automatically via cron to monitor log files and create a report of interesting output. By default, most system administrators set up logwatch to report on log output once per day, which is sufficient for many systems. You may need more frequent output, however, depending upon the criticality of the information and the needs of your business. Logwatch itself is configured with a simple text configuration file, usually stored in /etc/logwatch/conf/logwatch.conf, which describes the log files you want monitored and what type of output should be reported. Because logwatch is relatively flexible, you can separate out the information if you wish and report only on certain logs. These details can be specified using additional separate files, called filters, which control what output is reported; these scripts are usually placed in /etc/log.d/scripts.
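
A minimal configuration might look like the following; directive names can vary slightly between logwatch versions, so treat this as a sketch and compare it against the defaults shipped with your copy:

 # /etc/logwatch/conf/logwatch.conf
 MailTo = root
 Range = yesterday
 Detail = Med
 Service = All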

With multiple cron entries and different filters, you can have logwatch execute more frequently to look for more critical information, and have other, more general information reported less frequently; it is all in how you configure it. Kirk Bauer, the author of logwatch, has recently written his own book, Automating UNIX and Linux Administration (Apress, 2003), which should be available by the time you read this, and we recommend it for further in-depth discussion of how to get the most out of logwatch. Keep in mind that for larger environments a centralized logging repository is highly recommended, for both security and correlation reasons. There are many commercial and noncommercial packages available to provide an enterprise-grade log aggregation and correlation system.
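
For example, one possible pair of /etc/crontab entries; the logwatch path and the flag values should be verified against your installed version, although --service, --detail, and --range are standard logwatch command-line options:

 # Hourly, high-detail pass over the SSH-related logs only
 0 * * * *  root /usr/sbin/logwatch --service sshd --detail High --range today
 # Daily, low-detail pass over everything from the previous day
 30 6 * * * root /usr/sbin/logwatch --service All --detail Low --range yesterday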


