UNIX-Based Investigations




UNIX File System Analysis

To briefly review the UNIX file system: each disk drive is divided into one or more partitions, and each partition contains a single file system. Within each file system is a list of inodes and a set of data blocks. Each inode holds almost all of the information that describes an individual file, including the size of the file, the locations of its disk blocks, and so on. Inode numbers and their corresponding file names are stored in directory entries.
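
As a quick illustration (a minimal sketch assuming a Linux or BSD shell; the file name is only an example), the inode number behind a file name and the metadata stored in the inode can be viewed with standard commands:

    ls -i /etc/hosts     # prints the inode number in front of the file name
    stat /etc/hosts      # prints the size, link count, and timestamps held in that inode (output format differs between GNU and BSD versions)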

Data blocks are fixed-size blocks of file data. The UNIX file system divides any read or write request into logical blocks of data that correspond to physical blocks on the disk. The downside of this approach is that a file's data blocks are not necessarily contiguous to one another. Typically, the UNIX file system uses 8 KB as the size of its logical data blocks.

In UNIX, when a file is deleted, the name remains in the directory, but the inode number to which the name points is removed. The inode itself is changed: the ctime is updated and the data block locations are erased. As the file is deleted, UNIX decrements the inode's link count to zero.

This erase action is performed by removing every file-name-to-inode pair in the directory entries. When the inode is deleted, the kernel marks its resources as available. The inode still contains data about the file and remains intact until it has been reallocated and overwritten. Inodes that contain some data but have a link count of zero are therefore deleted inodes. Even without the file's content available, investigators can learn much about the file from the metadata remaining in the directory entries and inodes.

To mount a file system, the UNIX kernel needs the sizes and locations of the file system metadata. The first piece of metadata is the super-block, which is stored at a known location on the drive. The super-block contains information such as the number of inodes and data blocks, the size of a data block, and so on. Based on the information in the super-block, the kernel can calculate the locations and sizes of the inode table and the data portion of the disk. Inodes and data blocks are clustered together in groups scattered across the hard drive media. UNIX usually maintains redundant copies of the super-block, the inode table, and the block array in the event of disastrous data loss.
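
On a Linux ext2 or ext3 partition, for example, the super-block can be read with dumpe2fs from the e2fsprogs package; this is a sketch assuming the partition is /dev/sda1:

    dumpe2fs -h /dev/sda1    # -h prints only the super-block: inode count, block count, block size, and so on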

Undeleting UNIX

Undeleting UNIX files is different from handling Windows restoration issues. UNIX can be configured to make frequent backups, and it is not unusual for backups to be made hourly. With luck, there will be an easily accessed backup, and it may be stored in a location such as /class/.snapshot. Examining this directory will reveal backups of the home directory from several points in the past. Items lost or corrupted in the regular home directory can be copied back from this directory, as sketched after the path list below.

There is a UNIX command that should find the directory in which the home account's backup is stored: find /class/.snapshot/hourly.0 -name $USER -prune 2>/dev/null

Backups may be in the following path examples:

  • /class/.snapshot/hourly.0/01

  • /class/.snapshot/hourly.1/01

  • /class/.snapshot/weekly.0/01

  • /class/.snapshot/weekly.1/01
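
Assuming a snapshot layout like the one above, restoring a file is a matter of ordinary shell commands; the file name report.txt is purely illustrative:

    ls /class/.snapshot/hourly.0/$USER                  # list the most recent hourly copy of the home directory
    cp /class/.snapshot/hourly.0/$USER/report.txt ~/    # copy the lost file back into the live home directory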

Retrieving lost e-mail is a very similar process. UNIX makes copies of e-mail in the same manner as it copies the home directory. Copies of e-mail can be accessed through /var/mail/.snapshot. Reviewing this directory may reveal backups of deleted e-mail that can be captured. Be mindful that viewing the e-mail contained in this folder will require loading it into an e-mail client.

Data Hiding Techniques

Data "deleted" from UNIX files can be located by investigators with hex editors searching for specific files and file extensions. However, there is a technique where malicious individuals may attempt to "hide" data from forensic investigations. For the most part, it is effective in systems not using the Berkley Fast File System or FFS.

Experience Note 

FFS deprecated the bad blocks inode, preventing individuals from hiding data there.

By way of explanation, the bad blocks inode has been used to reference data blocks occupying bad sectors on the target disk, thereby preventing those blocks from being reused by live files. If investigators run the file system checker utility, fsck, it is possible that the file system will be reported as having been radically altered.

The first inode that is capable of allocating block resources on an ext2 file system is the bad blocks inode (inode 1) and not the root inode (inode 2). Because of this positioning, it is possible to store data in the blocks allocated to the bad blocks inode and have it hidden from many forensic tools. Malicious individuals have tools that allow them to exploit this flaw in the UNIX file system and store data outside the view of forensic tools. It is for this reason that investigators should examine the bad blocks inode at the physical level before moving on to other areas.
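
One way to check for this on an ext2 file system is to inspect inode 1 directly. The sketch below uses debugfs from e2fsprogs and icat from The Coroner's Toolkit, and assumes the suspect partition is /dev/sda1:

    debugfs -R 'stat <1>' /dev/sda1    # show the bad blocks inode; a non-zero size or block list is suspicious
    icat /dev/sda1 1 > badblocks.out   # copy whatever data those blocks hold into a file for examination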

Coroner's Toolkit

Responding to reports of critical incidents on UNIX-based systems, investigators establish a timeline of events beginning from the last time the system was stable and uncorrupted, through the actual event, and until it was discovered and brought to a halt. The Coroner's Toolkit is a collection of utilities that gather and analyze data on the target UNIX-based system to help investigators accomplish this goal.

The Coroner's Toolkit contains the following data-gathering tools:

  • grave-robber, the main data-gathering program

  • file, Ian Darwin's file command

  • icat, copies a file by inode number

  • ils, lists file system inode information

  • lastcomm, a portable lastcomm command

  • mactime, the MAC time file system reporter

  • md5, the RSA-based MD5 digital signature tool

  • pcat, copies the address space of a running process

  • unrm, uncovers unallocated blocks from a raw UNIX file system

  • Lazarus, attempts to resurrect deleted files or data from raw data

Within the Coroner's Toolkit, available at www.porcupine.org/forensics/, there is a tool called unrm that can emit all the unallocated data blocks on a UNIX file system. It functions by reading the file system's list of free data blocks, locating each logical block, and copying out its contents. unrm should be used if investigators are looking for a specific file known to be deleted.
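
A minimal unrm run is shown below; the device name is illustrative, and the output should be written to a separate forensic drive because it can be as large as all the free space on the target partition:

    unrm /dev/sda1 > /forensics/sda1.unrm    # collect every unallocated data block into a single raw file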

Another effective data recovery tool is Lazarus. Lazarus attempts to give unstructured data some structure that can be viewed and manipulated. Lazarus depends on the fact that UNIX rarely writes file data except on well-defined boundaries. UNIX generally writes files in contiguous data blocks when possible, attempting to boost performance; for this reason, UNIX should never need a defragmenting utility, unlike some other popular operating systems. Lazarus builds a map of the data it is given and provides visibility into the disk by classifying blocks according to their content type. Lazarus is a very comprehensive program in that it takes a very broad view of deleted files.

Using unrm and Lazarus will fill significant amounts of disk space on the forensic machine. Because Lazarus takes a large view of its work, it does not run in just a few minutes, so investigators are advised to let it run for several hours, if not days, before being able to see the results.

Using unrm and Lazarus is not as easy as using a Windows file recovery tool; it is a time-consuming and laborious process, best described as "hit and miss."
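
Lazarus is then pointed at the unrm output. The option letters below follow the TCT documentation for HTML output and block directories, but they should be verified against the installed copy before use:

    lazarus -h -H /forensics/html -D /forensics/blocks /forensics/sda1.unrm
    # -h requests an HTML map of the recovered data; -H and -D name the directories
    # that receive the HTML report and the recovered data blocks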

Experience Note 

There is an interesting series of short articles about UNIX/Linux file system recoveries located at recover.sourceforge.net/linux.

Success rates for restoring deleted UNIX files are spotty at best. The easiest way to restore deleted files on a UNIX system is to access an uncorrupted backup copy. UNIX administrators should back up all critical files on a regular basis, observing the risk manager's axiom: the more important a file is, the more often it should be backed up.

Hiding Files

Users will go to great lengths to cover their tracks. Regrettably, employees and attackers employ a variety of ways to hide their nefarious acts by concealing files, including:

  • Placing them in storage facilities located outside the workplace

  • Secreting data in hidden hard disk partitions hoping no one will think to look there

  • Encrypting data partitions of their hard drives with complex algorithms

  • Hiding data through the use of steganography

Web sites provide storage for users and may be accessed from any system with Internet access. For a small fee, files may be securely and privately stored on remote sites, ready for access at some future date. For investigators to locate such services on target machines, it is recommended that a thorough review be made of the pertinent files resident on the subject's computer. Look for URLs in the browser history, temporary, and bookmark files indicating that files may have been stored remotely. If the user is sophisticated enough to conceal such activity on the workstation, check the logs of outbound network access for connections to online storage sites.

When files, storage media, or hard drives are protected by a password, investigators may try to obtain access by running a password-cracking application. It is important to identify the originating application, such as Word, Excel, or WordPerfect, as many cracking applications are specific to individual programs. With password-protected files, it may be possible to run a tool like John the Ripper and brute-force the password.
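
John the Ripper works against password hashes rather than documents, so for ordinary UNIX account passwords a brute-force attempt typically looks like the following (assuming the passwd and shadow files have been copied off the evidence drive):

    unshadow passwd.copy shadow.copy > combined.txt    # merge the hashes back into the format John expects
    john combined.txt                                  # run John's default cracking modes against the hashes
    john --show combined.txt                           # display any passwords recovered so far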

With encrypted hard drives, if investigators can identify the protection application, some manufacturers provide a universal or recovery password that permits access in the event the drive's owner forgot her password. In these applications, the encryption key is derived from the password. If the drive's content is completely encrypted, working at the physical level is not going to provide any insight; having the password is the only practical way to access the drive.

Steganography

Steganography is a concealment technique in which information is hidden in plain sight. By secreting data in otherwise innocuous multimedia objects (usually image or sound files) called carriers, steganography can hide information so that it remains essentially impervious to detection. Steganographic applications both encrypt the data and conceal its existence in the carrier. Detecting these carrier files is a two-pronged problem: first the multimedia file containing the data must be identified, and then the data must be deciphered. With a workstation possibly containing hundreds or even thousands of such files, the task is formidable indeed.

Investigators can possibly determine whether they are dealing with a steganography user by locating such a program on the target hard drive or elsewhere. Finding the application might indicate the user has carrier files with hidden data in them. Opening an image or sound file in a steganographic application will result in a password prompt, but this prompt does not necessarily mean the file contains hidden data; most steganographic applications prompt for a password whether or not data is concealed in the file, so they cannot be used to screen potential carrier files.

There is one tool that claims to be able to detect data hidden steganographically in .jpg files; it is available at www.outguess.org/download.php.
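
The detection tool in question is stegdetect, which looks for the statistical signatures that several popular hiding programs leave in JPEG images; a simple scan of a directory of images (assuming stegdetect is installed) is:

    stegdetect *.jpg    # flags images that appear to carry jsteg, jphide, or outguess payloads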

A wide variety of steganography tools are available at members.tripod.com/steganography/stego/software.html.

Creative investigators are well served by trying to obtain passwords from the owner before attempting to brute-force them in an effort to decipher encrypted files and drives. Failing that, a technique such as installing a keyboard monitor to capture all of the user's keystrokes may prove to be the answer. Outside of law enforcement, employers may deploy keystroke monitors where employees do not have any reasonable expectation of privacy in their workstation or system use.

Experience Note 

Within the statutory constraints applicable to law enforcement agencies, search warrants and court ordered wiretaps govern the use of keyboard monitors.

Software keyboard monitors and similar tools made by Spector Soft are available at www.spector.com.

Hardware keyboard monitors are available at www.keykatcher.com/index.htm.

Locating hardware keyboard monitors is a matter of tracing desktop cabling. These are small cylindrical or box-shaped devices placed in series with the cables connecting keyboards to desktop towers or between the desktop and the network. Investigators should be aware that KeyKatcher makes a keyboard with the monitor built in, eliminating the inline device.

Finding hardware keyboard monitor information is available at www.spy-cop.com/keyloggerremoval.htm.

Finding software keyboard monitor software information is available at www.spy-cop.com/spycop-free-product.htm.

Before using such a technique, it would be prudent to consult with prosecutors or corporate legal counsel. Also, consult with your legal counsel before attempting to provide the results of a keyboard monitor to law officers.

Strong Encrypted Protections

There seems to be a growing use of encryption with very strong protection features. While there are many talented investigators and analysts in the world today, the conversion of strongly encrypted data to plaintext is, in most cases, virtually impossible. Unless data can be captured from the target keyboard through legal means, investigators have to accept that there will be information that cannot be accessed within the limits of current technology and resources.

File Recovery Alternatives for UNIX/Linux

There is an alternative file recovery utility for Ext2 file systems used in Linux and some flavors of UNIX. It uses proprietary technology and flexible settings that provide control over the data recovery process. It is called R-Linux and is available at www.rtt.com/RLinux.shtml. It is interesting to note that R-Linux is a Windows-based utility used to recover Linux data.

Understanding File Permissions

Here is a very brief refresher on file permissions. Each file carries three sets of permissions: one for the owner, one for the group, and a final set applying to all other users on the system; the superuser, root, has access to every file in the file system regardless of permissions. On UNIX systems, running ls -l shows what type of file each entry is and what access has been granted. File access is simply defined as the ability to read, write, or execute.

  • Read "r" access means that users can open a file and read the contents.

  • Write "w" access means that users can overwrite the file with a new one or modify its contents.

  • Execute "x" access means that users can run the file as a program; if a file has its execute bit set, users with this permission can launch it.

File types are as follows:

Character    Meaning
-            Plain file
d            Directory
c            Character device, e.g., printer
b            Block device, e.g., CD-ROM

These are typical file permissions: drwxr--r--. In this example, "d" means the file is a directory, "rwx" is the read, write, and execute access granted to the file's owner, the first "r--" grants read-only access to group members, and the final "r--" grants read-only access to all other non-owner, non-group users.
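
The same fields appear in the first column of an ls -l listing, which is the quickest way to review permissions during an examination; the directory name below is illustrative:

    ls -ld /home/suspect/reports
    # sample output:  drwxr--r--  2 suspect finan 4096 Dec  9  2002 /home/suspect/reports
    # 'd' = directory; the owner has rwx, while group members and all other users have read-only access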

File Stamps

File stamps are some of the investigator's most valued reconstruction resources. Most operating systems keep at least three relevant timestamps for each file, termed mtime (modification time), atime (access time), and ctime (change of status time). In the case of the Linux ext2 file system, there is also a delete time (dtime).

The knowledge investigators have of these MAC (modify, access, change) times will generally determine their effectiveness in reconstructing and interpreting events:

  • Access time is exactly that, the last time a directory or file was accessed or opened for viewing.

  • Modification time is the time the contents of a file were last altered in any way.

  • Changed status refers to changing information about the file, such as file ownership, permission, and group settings.
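
All three timestamps for a file can be displayed with the stat command (output wording differs between the GNU and BSD versions):

    stat /var/log/messages    # the Access, Modify, and Change lines are the atime, mtime, and ctime, respectively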

Experience Note 

In the Microsoft operating systems, the MAC is described as the Last Time Modified, Last Time Accessed, and Time Created.

Of course, there are tools that will perform timestamp reconstruction. The premier tool set for UNIX-based systems is The Coroner's Toolkit, available at www.porcupine.org/forensics/tct.html. Within this set of tools is a utility called mactime, which compiles its results into an ASCII report suitable for viewing. After running grave-robber, a program that captures data from a system including MAC timestamps, the mactime program is used to view a focused portion of the timeline. Generally, mactime uses the database generated by grave-robber to deliver chronological output displaying all the files and programs touched or executed during a given time period. This is very important to the investigator trying to reconstruct events, as the output reveals the relevant timestamps, the file permissions, the owner and group, and the file name.
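
A typical collection-and-review sequence with these two tools looks roughly like the sketch below. The option letters reflect the TCT documentation and should be checked against the installed version; the mount point, output directory, and date are illustrative:

    grave-robber -c /mnt/evidence -o LINUX2 -m -d /forensics/data
    # -c points at the mounted copy of the suspect file system, -o names its operating system,
    # -m collects MAC times, and -d sets the directory where the body database is written
    mactime -b /forensics/data/body 12/01/2002
    # print all file activity from December 1, 2002 forward in chronological order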

The typical output of mactime is similar to the following:

Date                  Size    MAC   Permissions   Owner   Group   File Name
Dec 09 02 18:15:01    2998    .a.   -rwxr-xr-x    root    finan   /usr/bin/login
                      41556   .a.   -rwxr-xr-x    root    finan   /usr/etc/inetd
                      21009   .a.   -rwxr-xr-x    root    finan   /etc/inetd
Dec 09 02 18:15:45    22756   m.c   -rw-rw-x      root    finan   /var/finadmin/lastlog
Dec 09 02 19:19:00    2147    m.c   -rw-r--r--    root    finan   /etc/passwd

Mactime can be used on networked UNIX machines as well as those that have been taken offline. It is not important for the target media times to be the same as the forensic machine. The mactime utility can also read and collect MAC times from an NTFS system.

Opening a file for reading will change its atime. Run lstat() on the file and note the information before opening it for examination. Investigators should also disable atime updates on the forensic machine so that examining the evidence does not alter it.
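
On a Linux forensic workstation, the usual safeguard is to mount the evidence image read-only with access-time updates switched off; the image and mount-point names are illustrative:

    mount -o ro,noatime,loop /forensics/sda1.dd /mnt/evidence    # read-only, no atime updates, loopback image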

In UNIX systems, when a file is removed, the ctime is set to the time the last link to the file was removed, effectively noting when it was deleted. The inode number is also removed from the directory entry. This makes recovering UNIX file data very difficult. In most UNIX versions, if the target media can be forensically copied before the operating system synchronizes its buffers to disk, it is possible that MAC times are preserved. In contrast, NTFS does not remove all the file information when a file is deleted; rather, that information remains in the file's record in the MFT, which is simply marked to indicate the file is no longer in use and is available for overwriting.

The MAC times of UNIX inodes that were once attached to files may be recovered using the ils tool found in the Coroner's Toolkit. MAC times are a very important part of the puzzle investigators need to reconstruct timelines surrounding critical events, but they are not the whole picture. If MAC times are going to be collected from a machine that is offline, collect them early, before the target machine is turned back on.
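
A sketch of that ils step, assuming the suspect partition is /dev/sda1 and TCT is installed:

    ils /dev/sda1 > removed-inodes.txt    # by default ils lists removed inodes, including their MAC times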

Experience Note 

If an investigator were able to recover inode information and the corresponding MAC times from the unallocated portion of the file system and compare them with the live file system, it might be possible to find the time when an intruder started changing and deleting files, and when the attacker entered and departed the system.

Baseline Comparison for SUID/SGID Files

If an uncorrupted copy of the file system exists, for example in the form of a backup, it might be possible to identify an attacker's backdoor.

Experience Note 

As a matter of course, it is wise practice to compare the SUID files on the target UNIX system with those on the "clean" backup. Discovering files outside this baseline does not necessarily mean there is a backdoor; however, investigators and systems administrators must be able to account for any anomalous findings.
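
A simple way to build and compare such a baseline, assuming GNU find and a trusted listing saved earlier as suid.baseline:

    find / -xdev \( -perm -4000 -o -perm -2000 \) -type f -print | sort > suid.current
    diff suid.baseline suid.current    # lines present only in suid.current are new SUID/SGID files to explain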

System Configuration

Investigators should check the /etc/syslog.conf file to determine where the system stores its logs and which events are logged. This configuration file establishes the logging facilities, the priorities at which they are recorded, and the destinations where the entries are written.
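
A quick way to review the active rules, with two illustrative entry formats shown as comments:

    grep -v '^#' /etc/syslog.conf        # show only the active facility.priority-to-destination rules
    # typical entries look like:
    #   auth.info      /var/log/authlog   <- login and su messages are written to this file
    #   *.emerg        *                  <- emergency messages are sent to all logged-in users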

User and Password Accounts

The file /etc/passwd identifies user account names and may contain password hashes, user and group identification numbers, user home directories, and general user information. If the /etc/passwd file contains password hashes, rather than keeping them in a separate shadow file, the system is vulnerable to password cracking. Investigators should consider this likelihood, and an initial review of a UNIX system should include checking for shared user-ids. In reviewing the password file, note that user-id 0 is reserved for root access only; any additional user-id 0 accounts or shared user-ids should be questioned.
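
Two quick checks for these conditions, using standard awk, sort, and uniq:

    awk -F: '$3 == 0 {print $1}' /etc/passwd             # every account with user-id 0; only root should appear
    awk -F: '{print $3}' /etc/passwd | sort | uniq -d    # user-ids shared by more than one account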

Log Files

These are a few of the system log files that could be encountered during an investigation. By default, UNIX-based systems use the following log file names (the companion commands for reading them are sketched after this list):

  • wtmp and wtmpx. These logs keep track of logins and logouts.

  • utmp and utmpx. These logs keep track of users presently logged on the system.

  • lastlog. This log tracks users' most recent login time and records their initiating IP address.

  • sulog. This log records the use of the su (substitute user) command.

  • syslogd. This is the system logging daemon; it writes log entries to the destinations named in the configuration file /etc/syslog.conf.

  • History file. This file records the history of recent commands used by the individual user.

  • TCP Wrappers. This package uses syslogd to log connection attempts.
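
Most of these logs are binary and are read with companion commands rather than a text editor; the usual ones on a typical UNIX or Linux system are:

    last                    # reads wtmp: the login and logout history
    who                     # reads utmp: users presently logged on
    lastlog                 # reads lastlog: each user's most recent login and originating address
    cat ~/.bash_history     # a user's shell history file when the shell is bash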

Experience Note 

Root can arbitrarily name log files in the syslog configuration file, /etc/syslog.conf. It is possible that some systems administrators have taken the latitude of renaming log files to apparently meaningless names.

These UNIX programs are helpful to investigators and if installed, they can save investigators a lot of time in reconstructing events:

  • Check Promiscuous Mode. It checks a system for any network interfaces in promiscuous mode, which may indicate that an attacker has broken in and started a packet-snooping program (a quick manual check is also sketched after this list).

  • ifstatus. The ifstatus program was written by David Curry and checks a system for any network interface cards placed in promiscuous mode. This application is designed to be run as a scheduled event.

  • Spar. Spar is a program intended to show process accounting records and is usually installed for computer-time billing purposes. However, in skilled hands, it is a valuable tool to establish when an attacker was using system resources.

  • Tripwire. The Tripwire application is available from Purdue University. It scans designated files and computes hashes for them. At a future time, it can be used to check those files for changes indicating a possible attack.

All the above applications are available at ciac.llnl.gov/ciac/ToolsUNIXSysMon.html.
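
Where those packages are not installed, a quick manual check for promiscuous interfaces can still be made with the standard interface tools; the interface name is illustrative, and output wording varies between systems:

    ifconfig -a | grep -i promisc        # older UNIX/Linux systems set a PROMISC flag in the interface output
    ip link show eth0 | grep -i promisc  # Linux systems with the iproute2 tools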


