Until this point, the assumption has been that the target computer is some sort of PC or workstation, probably running a version of Microsoft Windows. Regardless of the version, PC forensics is generally the same whether the computer runs DOS, Windows NT, or Windows 95. UNIX, however, presents some significant challenges to the investigator. UNIX has both advantages and disadvantages for power users. It is a simpler operating system than Windows in that it has fewer layers between the hardware and the end user. Configuration files tend to be text-based rather than binary. There are virtually no restrictions on the power of the superuser account. These same characteristics can make UNIX forensics both easier and more difficult.

UNIX Is Different

One of the fundamental differences between UNIX and PC operating systems is the general simplicity of the system. Because UNIX is an open system, the internal operations of binaries, libraries, and networking protocols are well documented. The disadvantage of this openness is that many tools exist to modify system binaries to allow remote access, hide network traces, and conceal evidence of intrusions. On the other hand, UNIX offers some advantages to the investigator. A drive or partition can be mounted read-only for analysis. If a trusted version of the operating system is used, the investigator has complete control over the media, including the capability to view and analyze hidden and system files.

Imaging UNIX Workstations

In some cases, imaging a workstation is no different from imaging a PC. If the workstation is x86-based, the investigator can simply insert a blank drive, boot from the forensics floppy, and use one of the tools in Chapter 8, "Forensics I," to make an image of the disk for later analysis. If the workstation has a different processor but has a disk drive that can be mounted in the forensics computer (for example, a SCSI disk), the investigator can still make an imaged copy.
This copy can be examined to some extent by using the forensics software to search the disk at the hardware level for text fragments. More detailed examination, however, will not be possible without an operating system that can read the file system. The first imaged copy can be used to make multiple backup copies that can then be analyzed on another platform. If it is not feasible to make an image this way, the investigator can use native UNIX utilities. The best utility for copying UNIX partitions is the dd command, which makes a complete bit-for-bit copy (including deleted files and slack space). This copy can then be restored to a clean partition on the forensics machine or examined directly (see the sidebar about Linux loopback).
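The dd acquisition described above can be sketched as follows. This is a minimal illustration that uses an ordinary file in place of a raw device; in a real acquisition, the input would be a device node such as /dev/sdb (an assumed name -- device naming varies by UNIX variant).

```shell
# Stand-in for the suspect disk; a real acquisition would read from a
# raw device node (e.g. /dev/sdb -- an assumed name).
printf 'user data, deleted files, and slack space' > disk.raw

# dd copies every block verbatim, so deleted files and slack space on
# the source are preserved in the image. bs=512 reads sector-sized
# blocks; on real media, add conv=noerror,sync to continue past bad
# sectors, padding them with zeros.
dd if=disk.raw of=suspect.img bs=512 2>/dev/null

# Verify the image matches the original before any analysis begins.
cmp disk.raw suspect.img && echo "image verified"

# Record a checksum so the image's integrity can be demonstrated later.
md5sum suspect.img > suspect.img.md5
```

On a Linux analysis machine, the resulting image can then be attached read-only with `mount -o ro,loop suspect.img /mnt/evidence`, the loopback technique mentioned in the sidebar.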
The rule of thumb when shutting down a personal computer is to simply pull the power cord. This is the simplest, and probably safest, technique with a UNIX workstation as well. It does, however, have some drawbacks:
The investigator might choose to cautiously examine the machine prior to shutdown to gather information such as the last logons, running processes, active network connections, and open file handles. However, the risks of a graceful shutdown are also high:
The investigator must assess the benefits of securing this information prior to the shutdown and weigh them against the suspected skill of the attacker. When in doubt, a crash shutdown by pulling the plug is less likely to destroy evidence than a graceful shutdown. In some cases, a dd copy might not be feasible. Backup copies using tar or cpio might be alternatives, with the understanding that significant evidence such as file slack will not be preserved. It is also possible to remove the disk drive and mount it as a separate read-only device in another machine. This is not recommended because any error by the investigator can result in the destruction of the original evidence. It is far safer to make an imaged copy and perform all analysis on the copy. If the drive can be successfully moved from one machine to another, it should be possible to make a copy as well.

UNIX Analysis

Most analysis of UNIX systems is performed using native UNIX utilities. Disks can be examined using a disk editor regardless of the operating system or file system, and it might be prudent to use one of the forensics tools from the preceding chapter for this portion of the search, if only to be able to state that the tools used were commonly accepted in the field of forensics.

Binary Files

The fundamental rule of forensics is to never examine a drive using its own operating system. With a Windows machine, this is done to prevent two major issues:
These two problems exist in UNIX as well, although the relative severity of the two is probably reversed. RootKit tools exist for all major UNIX variants. These tools allow an attacker to do the following:
These tools work by replacing system binaries such as login, ps, and ls. For this reason, all the binaries in a compromised system must be assumed to be untrusted and must not be used in the examination. The simplest way to conduct this examination is to mount the disk (the copy, of course, not the original) as a separate read-only file system on a separate machine. Ideally, this separate machine should have the same operating system (and the same patch level) as the suspect machine so that binaries and configuration files can be compared for possible modifications. In practice, this is usually not possible. One of the basic steps in the examination of UNIX systems is to look for compromised or modified system files. The simplest way to do this is to use a tool such as Tripwire, which makes a cryptographic hash of each critical file. If the current hash is different, the file has been modified. Obviously, this supposes that Tripwire was installed on the machine prior to the compromise. Failing that, the second choice is a direct comparison with known-good files. Again, this supposes that a machine with the same configuration and operating system is available (with exactly the same patch level). This might also be impossible. Sun is developing a database of MD5 checksums for all standard releases of Solaris at various patch levels, but this tool is not yet available. The final option is to manually inspect the system binaries. This is generally done by using the strings command to search for ASCII text within the binary. For example, the /bin/login command has no readable text within it. If the strings command provides any output, the program has been compromised. Even if an exact copy of the binary cannot be found, the strings output from a clean copy and a compromised copy will be similar. However, it is trivial for a sophisticated attacker to disguise the text within a binary. The absence of output should not be considered conclusive.
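The two comparison techniques just described -- checksum comparison against a known-good copy, and strings inspection -- can be sketched as below. Two ordinary files stand in for the clean and suspect binaries; on a real examination, the inputs would be files such as /bin/login from the trusted and suspect file systems, and the planted "backdoor_password" string is purely illustrative.

```shell
# Stand-ins for a clean binary and a trojaned copy. The embedded
# "backdoor_password" string simulates text a RootKit might add.
printf 'ELF\0shared\0symbols\0' > login.clean
printf 'ELF\0shared\0symbols\0backdoor_password\0' > login.suspect

# 1. Checksum comparison: any modification changes the hash, so the
#    two sums will differ.
md5sum login.clean login.suspect

# 2. strings extracts printable ASCII runs from each file; text that
#    appears only in the suspect copy is a red flag.
strings login.clean   > clean.txt
strings login.suspect > suspect.txt
grep -v -F -x -f clean.txt suspect.txt   # lines unique to the suspect copy
```

The final grep prints only the strings absent from the clean copy (here, the planted backdoor text). As the chapter notes, a negative result is not conclusive, because an attacker can easily obfuscate text inside a binary.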
Configuration Files and Logs

The next step is to look at logs and configuration files on the target machine. These include system logs such as sulog, files that control services such as inetd.conf, and system text files such as /etc/passwd. If an examination of any of these logs reveals a suspicious user account, its home directory should be carefully searched for hidden files such as .rhosts and shell history files. Any odd or unusual binary files should be examined with the strings command as well. One potential problem with using another machine to examine the system is that all file ownership information comes from the trusted machine, not the imaged disk. The investigator must manually compare user IDs and group IDs to determine the actual owners of the files. The Coroner's Toolkit, discussed in the following sidebar, provides a way to determine the ownership of the files directly.
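The home-directory sweep just described might look like the following sketch. The directory layout and the account name (baduser) are invented for illustration; on a real examination, the path would sit under the read-only mount point of the imaged disk (e.g. /mnt/evidence/home/baduser).

```shell
# Fabricated suspect home directory standing in for one found on the
# read-only mounted image.
MOUNT=./evidence_copy
mkdir -p "$MOUNT/home/baduser"
printf '+ +\n' > "$MOUNT/home/baduser/.rhosts"
printf 'nc -l -p 31337\n' > "$MOUNT/home/baduser/.sh_history"

# Enumerate hidden (dot) files belonging to the account.
find "$MOUNT/home/baduser" -name '.*' -type f

# A .rhosts entry of "+ +" allows passwordless rlogin from any host
# and any user -- a classic backdoor.
cat "$MOUNT/home/baduser/.rhosts"

# Shell history files can preserve the commands an intruder typed.
cat "$MOUNT/home/baduser/.sh_history"

# ls -lan prints numeric UIDs/GIDs, avoiding misleading name lookups
# against the trusted analysis machine's own /etc/passwd.
ls -lan "$MOUNT/home/baduser"
```

Printing numeric IDs rather than names addresses the ownership problem noted above: the names shown by a plain `ls -l` would come from the trusted machine's password file, not the suspect system's.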
System files that have been modified or that show strange ownership or permissions are immediately suspect. A well-documented change-management program can be invaluable in this task. If the system documentation can enumerate all authorized changes, including the dates, the investigator's task is much simpler. A good business continuity plan with periodic backups is also important. The standard procedure for restoring a compromised system is either to restore from a known, clean backup or to rebuild the system from clean media. Neither is possible unless backups are current and the system configuration is documented.

Servers and Server Farms

When the incident involves a large server that cannot be taken offline or that has so much storage that it cannot be successfully imaged (or that uses RAID, so an image is technically not feasible), the investigator has no choice but to perform the analysis online. The best option is still to perform some sort of backup, at least of the suspected files and logs, and analyze them offline. A tape backup will not include information such as file slack, but it might be the only alternative. Working on a compromised machine, especially one that is still online, is a high-risk proposition. Because none of the system binaries can be trusted and because the attacker might have planted tools to destroy evidence, the investigator must proceed very carefully. Documentation is extremely important, if only to record what happened during the investigation as protection from future liability. The specific steps in analyzing a mission-critical system are beyond the scope of this book. The most important step, however, is a frank discussion of the potential risks with the business managers. The system owners must be aware of all the potential dangers inherent in this action, and they must be willing to accept the risks. System owners might be under pressure to keep critical servers online.
If this is the decision, it is the responsibility of the incident response team to provide management with enough information to make an informed decision. The final determination lies with business management, not with the incident response team. The following are some possible risks:
On the other hand, the system might be so critical that taking it offline will cause the company a greater loss (either in lost revenues or in public embarrassment). The final decision must lie with the affected business owners.