UNIX and Server Forensics


Until this point, the assumption has been that the target computer is some sort of PC or workstation, probably running a version of Microsoft Windows. Regardless of the version, PC forensics is generally the same whether the computer runs DOS, Windows NT, or Windows 95. UNIX, however, presents some significant challenges to the investigator.

UNIX has both advantages and disadvantages for power users. It is a simpler operating system than Windows in that it has fewer layers between the hardware and the end user. Configuration files tend to be text-based rather than binary. There are no restrictions on the power of the superuser account. These same characteristics can make conducting forensics on UNIX both easier and more difficult.

UNIX is Different

One of the fundamental differences between UNIX and PC operating systems is the general simplicity of the system. Because UNIX is an open system, internal operations of binaries, libraries, and networking protocols are well documented. The disadvantage of this is that many tools exist to modify system binaries to allow remote access, hide network traces, and conceal evidence of intrusions.

On the other hand, UNIX offers some advantages to the investigator. A drive or partition can be mounted read-only for analysis. If a trusted version of the operating system is used, the investigator has complete control over the media, including the capability to view and analyze hidden and system files.

Imaging UNIX Workstations

In some cases, imaging a workstation is no different from imaging a PC. If the workstation is x86-based, the investigator can simply insert a blank drive, boot from the forensics floppy, and use one of the tools in Chapter 8, "Forensics I," to make an image of the disk for later analysis. If the workstation has a different processor but has a disk drive that can be mounted in the forensics computer (for example, a SCSI disk), the investigator can still make an imaged copy.

UNIX Forensics Workstations

If the incident response team anticipates a requirement for UNIX forensics, it might be worthwhile to build or procure a dedicated workstation. This can be either a normal workstation (perhaps with additional tape drives or spare drive bays) or a custom-built machine.

If the company has a standard configuration and most of the likely target systems are similar, using a standard workstation is recommended. This will allow compromised drives to be examined using the same operating system and will allow direct comparisons of system binaries.

If the company is running multiple versions of UNIX, however, a custom forensics machine might be in order. Linux is recommended as the operating system because it supports multiple file systems. The file system types currently supported are listed in linux/fs/filesystems.c: adfs, affs, autofs, coda, qnx4, romfs, smbfs, sysv, udf, ufs, umsdos, vfat, xenix, and xiafs. [4]

The hardware requirements for a PC forensics workstation (as discussed in Chapter 8) are sufficient, although a SCSI card and tape drives will probably be required as well.

[4] From the Linux man pages for mount.

This copy can be examined to some extent by using the forensics software to search the disk at the hardware level for text fragments. More detailed examination, however, will not be possible without an operating system that can read the file system. The first imaged copy can be used to make multiple backup copies that can then be analyzed on another platform.
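An equivalent keyword search can also be done with native UNIX utilities once the image has been transferred to a UNIX machine. The following is a minimal sketch; the image file name and search term are only examples:

 # pull all printable text out of the raw image, then search it
 strings -a target.dd > target.strings
 grep -i 'password' target.strings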

If it is not feasible to make an image this way, the investigator can use native UNIX utilities. The best utility for copying UNIX partitions is the dd command, which makes a complete copy (including deleted files and slack space). This copy can then be restored to a clean partition on the forensics machine or can be examined directly (see the sidebar about Linux loopback).
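A minimal sketch of such a copy, assuming the suspect partition appears on the forensics machine as /dev/hda1 and a large evidence partition is mounted at /evidence (both names are hypothetical):

 # copy the raw partition, padding unreadable blocks instead of stopping
 dd if=/dev/hda1 of=/evidence/hda1.dd bs=8192 conv=noerror,sync
 # record checksums so the copy can later be shown to be faithful
 # (they will match only if no read errors forced padding)
 md5sum /dev/hda1 /evidence/hda1.dd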

Linux Loopback Devices

Linux supports the use of loopback devices, in which a file system can be contained inside a file. This is extremely useful when examining backups. A dd copy can be made of the target computer, and that file can then be mounted on a Linux machine as a virtual file system. If the copy of the root partition from a Solaris system is contained in a file called c0t3d0s0.dd, the image can be mounted read-only:

 mount -o ro,loop,ufstype=sun -t ufs c0t3d0s0.dd /mnt 

This will yield a complete copy of the root partition under /mnt on the Linux system, protected from modification because it is mounted read-only. Because Linux can read the Solaris file system, the Linux machine can then be used to examine the data. The Linux machine, of course, uses its own binaries, which are trusted for the purposes of the examination, so trojans in the Solaris binaries are bypassed.

The rule of thumb when shutting down a personal computer is to simply pull the power cord. This is the simplest, and probably safest, technique for a UNIX workstation as well. It does, however, have some drawbacks:

  • Any data that has not yet been flushed to disk will be lost.

  • Running processes will be lost.

  • Current network state information will not be available.

The investigator might choose to examine the machine cautiously prior to shutdown to find out information such as last logon, processes running, active network connections, and open file handles. However, the risks of this approach are also high:

  • The attacker might have inserted trojans into the shutdown command to delete logs or other evidence.

  • The /tmp directory will be flushed.

  • Common commands might also have been modified to either destroy evidence or provide incorrect information.

The investigator must assess the benefits of securing this information prior to the shutdown and weigh this against the suspected skill of the attacker. When in doubt, a crash shutdown by pulling the plug is less likely to destroy evidence than a graceful shutdown.
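If the decision is made to capture this volatile data before pulling the plug, one common approach is to run only trusted, statically linked binaries from read-only media and send the output to another machine rather than writing anything to the suspect disk. The following is a rough sketch only; the mount point, host name, and port are hypothetical, and it assumes a netcat binary is included on the trusted media:

 # on the forensics machine, listen for the output
 nc -l -p 9999 > volatile.txt

 # on the suspect machine, use only the trusted binaries on the CD-ROM
 ( /mnt/cdrom/bin/date
   /mnt/cdrom/bin/w
   /mnt/cdrom/bin/last
   /mnt/cdrom/bin/ps -ef
   /mnt/cdrom/bin/netstat -an
   /mnt/cdrom/bin/lsof ) | /mnt/cdrom/bin/nc forensics-host 9999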

In some cases, a dd copy might not be feasible. Backup copies using tar or cpio might be alternatives, understanding that significant evidence such as file slack will not be preserved. It is also possible to remove the disk drive and mount it as a separate read-only device in another machine. This is not recommended because any error by the investigator can result in the destruction of the original evidence. It is far safer to make an imaged copy and perform all analysis on the copy. If the drive can be successfully moved from one machine to another, it should be possible to make a copy as well.
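As a hedged illustration of the tar and cpio alternatives mentioned above (the tape device and the directory list are only examples):

 # write the most likely evidence to tape
 tar cvf /dev/rmt/0 /var/log /var/adm /etc /home
 # or build a cpio archive of selected directories on another file system
 find /var/adm /etc -depth -print | cpio -o > /evidence/config.cpio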

UNIX Analysis

Most analysis of UNIX systems is performed using native UNIX utilities. Disks can be examined using a disk editor regardless of the operating system or file system, and it might be prudent to use one of the forensics tools from the preceding chapter for this portion of the search, if only to be able to state that the tools used were commonly accepted in the field of forensics.

Binary Files

The fundamental rule of forensics is to never examine a drive using its own operating system. With a Windows machine, this is done to prevent two major issues:

  1. The very act of using the machine will modify the disk and data in unpredictable ways.

  2. The operating system on a compromised machine might have been modified so that it either will not reveal data or will destroy it.

These two problems exist in UNIX as well, although their relative severity is probably reversed. Rootkit tools exist for all major UNIX variants. These tools allow an attacker to do the following:

  • Hide his or her login and command history

  • Hide any processes running under his or her control

  • Hide any files and directories owned by the attacker

  • Hide open network sockets

  • Log in at any time with an unknown name and password

These tools work by replacing such system binaries as login, ps, and ls. For this reason, all the binaries on a compromised system must be assumed to be untrusted and must not be used in the examination.

The simplest way to conduct this examination is to mount the disk (the copy, of course, not the original) as a separate read-only file system on a separate machine. Ideally, this separate machine should have the same operating system (and the same patch level) as the suspect machine so that binaries and configuration files can be compared for possible modifications. In practice, this is usually not possible.

One of the basic steps in the examination of UNIX systems is to look for compromised or modified system files. The simplest way to do this is to use a tool such as Tripwire that makes a cryptographic hash of all the critical files. If the current hash is different, the file has been modified. Obviously, this supposes that Tripwire was installed on the machine prior to the compromise.

Failing that, the second choice is a direct comparison with known files. Again, this supposes that a machine with the same configuration and operating system is available (with exactly the same patch level). This might also be impossible. Sun is developing a database of MD5 checksums for all standard releases of Solaris at various patch levels, but this tool is not yet available.
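Where a reference machine with the identical operating system and patch level does exist, the comparison itself is simple. A sketch, assuming the suspect image is mounted read-only under /mnt on the reference machine and a trusted md5sum utility is available:

 # the two checksums should match if the suspect binary is unmodified
 md5sum /usr/bin/ps /mnt/usr/bin/ps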

The final option is to manually inspect the system binaries. This is generally done by using the strings command to search for ASCII text within the binary. For example, the /bin/login command has no readable text within it; if the strings command provides any output, the program has been compromised. Even if an exact copy of the binary cannot be found, the strings output from a clean copy of a similar version is close enough to serve as a baseline for comparison. However, it is trivial for a sophisticated attacker to disguise the text within a binary, so the absence of suspicious output should not be considered conclusive.
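For instance, with the suspect image mounted read-only under /mnt and a known-good copy of the same binary available on trusted media (the path /floppy/bin/login is hypothetical), the outputs can be compared directly:

 # extract readable text from both copies and compare
 strings -a /mnt/bin/login > suspect.txt
 strings -a /floppy/bin/login > clean.txt
 diff clean.txt suspect.txt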

Configuration Files and Logs

The next step is to look at logs and configuration files on the target machine. These include system logs such as sulog, files that control services such as inetd.conf, and system text files such as /etc/passwd. If an examination of any of these logs reveals a suspicious user account, its home directory should be carefully searched for hidden files, .rhosts files, or shell history files. Any odd or unusual binary files should be examined with the strings command as well.
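A few illustrative searches, again assuming the image is mounted read-only at /mnt (the patterns shown are only examples, not an exhaustive list):

 # hidden files and directories whose names begin with a dot
 find /mnt -name '.*' -ls
 # .rhosts files that could allow passwordless remote logins
 find /mnt -name .rhosts -ls
 # setuid-root programs, a favorite hiding place for backdoors
 find /mnt -user root -perm -4000 -ls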

One potential problem with using another machine to examine the system is that user and group names are resolved from the trusted machine's password and group files, not those on the imaged disk. The investigator must manually compare user IDs and group IDs to determine the actual owner of the files. The Coroner's Toolkit, discussed in the following sidebar, provides a way to determine the ownership of the files directly.
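A quick sketch of that manual mapping, using numeric IDs from the mounted image and the password file copied from the suspect system (the home directory and UID shown are hypothetical):

 # list files with numeric UIDs and GIDs instead of the trusted machine's names
 ls -ln /mnt/home/suspect
 # translate a numeric UID back to the account name defined on the suspect system
 awk -F: '$3 == 1004 {print $1}' /mnt/etc/passwd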

The Coroner's Toolkit

The Coroner's Toolkit (TCT) was developed by Dan Farmer and Wietse Venema. It provides tools for the search and investigation of UNIX file systems. The three major tools are as follows:

  • grave-robber. This tool scans the file system for i-node information, which is then used by the other tools.

  • unrm and lazarus. These tools can recover deleted file space and search it for data. Unlike the forensics tools discussed in Chapter 8, these tools do not allow the investigator to preview the data. They simply recover all deleted data to an available file system.

  • mactime. This program traverses the file system and produces a listing of all files based on the modification, access, and change timestamps from the i-node information. It accepts the target system's password file as input, so it will yield the true owner of the files.

The kit is available as source code with instructions. The current version as of May 2001 is 1.06. TCT can be downloaded from www.fish.com/tct/.
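A session with the kit might look roughly like the following. The exact flags differ between releases, so treat this as a sketch and consult the documentation included with the download; all paths are hypothetical:

 # gather i-node and file information from the mounted image
 grave-robber -c /mnt -o LINUX2 -d /evidence/tct-data
 # recover unallocated space from the raw image and sort it with lazarus
 unrm /evidence/hda1.dd > /evidence/unrm.out
 lazarus /evidence/unrm.out
 # build a timeline, resolving owners against the suspect system's password file
 mactime -p /mnt/etc/passwd -b /evidence/tct-data/body 1/1/2001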

System files that have been modified or that show strange ownership or permissions are immediately suspect. A well-documented change-management program can be invaluable in this task. If the system documentation can enumerate all authorized changes, including the dates, the investigator's task is much simpler.

A good business continuity plan with periodic backups is also important. The standard procedure for restoring a compromised system is either to restore from a known, clean backup or to rebuild the system from clean media. Neither is possible unless backups are current and the system configuration is documented.

Servers and Server Farms

When the incident involves a large server that cannot be taken offline or that has so much storage that it cannot be successfully imaged (or that has RAID, so an image is technically not feasible), the investigator has no choice but to perform the analysis online. The best option is still to perform some sort of backup, at least of the suspected files and logs, and analyze them offline. A tape backup will not include all the information such as file slack, but this might be the only alternative.
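One low-impact way to do this (a sketch only; the host name and directory list are examples, and it assumes a trusted ssh client is available) is to stream the archive to another machine so that nothing new is written to the server's own disks:

 # copy likely evidence off the server without creating files locally
 tar cf - /var/log /var/adm /etc | ssh investigator@forensics-host 'cat > server-evidence.tar'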

Working on a compromised machine, especially if it is still online, is a high-risk proposition. Because none of the system binaries can be trusted and because the attacker might have planted tools to destroy evidence, the investigator must proceed very carefully. Documentation is extremely important, if only to record what happened during the investigation as a protection from future liability.

The specific steps in analyzing a mission-critical system are beyond the scope of this book. The most important step, however, is a frank discussion of the potential risks with the business managers. The system owners must be aware of all the potential dangers inherent in this action, and they must be willing to accept the risks. System owners might be under pressure to keep critical servers online. If this is the decision, it is the responsibility of the incident response team to provide management with enough information to make an informed decision. The final determination lies with business management, not with the incident response team. The following are some possible risks:

  • Unrecoverable damage to the server operating system or data

  • Loss of evidence

  • Alerting the attacker to the investigation

On the other hand, the system might be so critical that taking it offline will cause the company a greater loss (either in lost revenues or in public embarrassment). The final decision must lie with the affected business owners.


