21.6 Large Networks


A company with a large IT department and a dedicated security staff is in a unique position in relation to security response. On the one hand, they have more resources (human and financial) and can accomplish more in terms of security; on the other hand, they have more eggs to watch, in many different baskets. They will likely spend more effort preparing for potential incidents and will often have the infrastructure to identify and contain them.

The theme for a large company's security response is often cost effectiveness: "How do we accomplish more with less? How do we stay safe and handle the threats that keep appearing in ever-increasing numbers? What do we do when the safeguards fail and the enterprise is faced with a major security crisis?" These questions can be answered by a good security plan based on the SANS six-step process.

A large network adds complexity to the security posture, and having complicated perimeter defenses and thousands of internal machines on various platforms does not simplify incident management. Firewalls, IDSs, various access points (e.g., dial-up servers, VPNs), and systems on the LAN generate vast amounts of security information. It is impossible to respond to all of it. In addition, few of the events mean anything without the proper context: a single packet arriving at port 80 of an internal machine might be somebody from within the LAN mistyping a URL (not important), it could be a port-scan attempt within the internal network (critical importance), or it could be misconfigured hardware trying to do network discovery (low importance).

Using automated tools to sort through the incoming data might help to discover hidden relations between various security data streams. The simplest example is the slow horizontal port scan (port 80 on one host, then port 80 on the next host, and so on), as opposed to a sequential scan of a single host (port 80, then port 81, and so on, all on the same machine). A single packet arriving at a port will most likely go unnoticed if the observer is only looking at an individual device's output, while the evidence of a port scan becomes clear with correlation. Thus, it makes sense to use technology to intelligently reduce the audit data and to perform analysis in order to selectively respond to confirmed danger signs. Commercial Security Information Management (SIM) solutions can achieve this.
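Correlation of this sort can be sketched with a few lines of standard tooling. The following is an illustrative sketch only, not a SIM product: the log format, the file paths, and the three-host threshold are all invented for the example. It tallies how many distinct destinations each source has probed on a given port, which is exactly the signal a per-device view misses.

```shell
#!/bin/sh
# Hypothetical connection log in "srcIP dstIP dstPort" form (an assumption
# for this sketch; real input would come from firewall or IDS logs).
cat > /tmp/conn.log <<'EOF'
10.0.0.5 192.168.1.1 80
10.0.0.5 192.168.1.2 80
10.0.0.5 192.168.1.3 80
10.0.0.9 192.168.1.1 80
EOF

# Count distinct destination hosts per (source, port) pair. Each packet
# looks innocent on its own device; the per-source tally exposes the
# horizontal scan.
awk '{
    key = $1 " -> port " $3
    if (!seen[key SUBSEP $2]++) hosts[key]++
}
END {
    for (k in hosts)
        if (hosts[k] >= 3)   # threshold is an arbitrary example value
            print "possible horizontal scan: " k " (" hosts[k] " hosts)"
}' /tmp/conn.log > /tmp/scan-report.txt

cat /tmp/scan-report.txt
```

Here 10.0.0.5 is flagged for touching port 80 on three different hosts, while 10.0.0.9's single probe stays below the threshold. A SIM solution performs this kind of aggregation continuously, across many device types at once.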

In a large environment, the security professional may be tempted not only to automate the collection and analysis of data but to save even more time by automating incident response. A certain degree of incident response automation is certainly desirable. A recent trend in technology merges SIM solutions with incident workflow engines and aims to optimize many of the response steps. However, an automated response can cause problems (see http://online.securityfocus.com/infocus/1540) if deployed carelessly. Difficult-to-track problems might involve creating DoS conditions on a company's own systems.

Incident response in a large corporate environment should have a distinct containment stage, since many organizations still adhere to the "hard outside and soft inside" architecture rather than one based on defense-in-depth. Thus, promptly stopping the spread of damage is essential to an organization's survival.

On the investigative side, a large organization is likely to cooperate with law enforcement and try to prosecute attackers. For certain industries (such as finance), reporting incidents to law enforcement is mandatory. As a result, the requirements for audit trails are stricter and should satisfy the standard for court evidence handling (hard copies locked in a safe, raw logs kept, etc.). You can learn more about law enforcement investigative procedures for computer crimes in the article "How the FBI Investigates Computer Crime" (http://www.cert.org/tech_tips/FBI_investigates_crime.html).

Overall, a large company's security response concentrates on intelligently filtering out events and developing policies to make incident handling fast and effective, while focusing on stopping the spread of the attack within internal networks. An internal response team might carry the burden of investigation, possibly in collaboration with law enforcement.

21.6.1 Incident Identification

Depending upon how far you want to go to improve the detection capabilities of your computer system, consider solutions ranging from installing a full-blown network intrusion detection system, such as Snort, to doing nothing and relying on backups as a method of recovery. The optimal solution lies somewhere between these extremes.

On Unix/Linux, an integrity-checking program helps a lot. Such programs can pinpoint all changes that have occurred in the filesystem. Unfortunately, malicious hackers have methods that can deceive those tools.

Here, we illustrate how easy it is to use such tools. For example, let's consider AIDE (a free clone of Tripwire with a much simpler interface). AIDE runs on Solaris, Linux, FreeBSD, Unixware, BSDi, OpenBSD, AIX, and Tru64 Unix. To use AIDE, perform the following steps:

  1. Download the source from its home site (http://www.cs.tut.fi/~rammer/aide.html) or from any of the popular Linux RPM sites (a binary RPM package is available for Linux).

  2. Install the binary RPM, or compile and install from source. To install the RPM:

     rpm -U aide-0.8-1.i386.rpm 

    To compile and install:

     tar zxf aide*gz; cd aide-0.8; ./configure; make; make install
  3. To create a database with a list of all file parameters (sizes, locations, cryptographic MD5 checksums), run aide --init . It is crucial to perform this step on a known clean system (e.g., before connecting the system to the network for the first time). Only a clean baseline allows reliable incident investigation in case of a compromise.

  4. To check the integrity of your system, run aide --check .

  5. To update the database after introducing some changes to your system, run aide --update .

To use the tool for effective security, you must safeguard the resulting database ( /var/aide/aide.db ) as well as the tool's binary file (such as /usr/bin/aide ) and related libraries. Copy them to a separate diskette to be used in case of an incident.
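The baseline-and-check cycle that AIDE automates can be illustrated with nothing more than sha256sum. This is a toy sketch, not a substitute for AIDE; the /tmp/demo paths and file contents are invented for the example.

```shell
#!/bin/sh
# Create a tiny "system" to protect (paths are invented for this sketch).
mkdir -p /tmp/demo/etc
echo "root:x:0:0" > /tmp/demo/etc/passwd

# The equivalent of "aide --init": record a checksum baseline while the
# system is known to be clean.
find /tmp/demo -type f -exec sha256sum {} + > /tmp/demo-baseline.db

# ... later, an attacker (or a careless admin) modifies a file ...
echo "evil:x:0:0" >> /tmp/demo/etc/passwd

# The equivalent of "aide --check": compare current state to the baseline.
# Any tampered file shows up as FAILED.
sha256sum -c /tmp/demo-baseline.db 2>/dev/null | grep FAILED
```

The FAILED line is the whole point of the exercise: it appears only because the baseline was taken before the change, which is why step 3 insists on a known clean system.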

21.6.2 Aggressive Response

We have covered some of the basics of incident response in this chapter. Now, let's address the absolute taboo of incident response: namely, the desire to hack back. If you feel like retaliating, get the attacker's IP address, run it through a whois service (either a program or an online service such as http://www.SamSpade.org), and report the intruders to their Internet Service Provider or, if their ISP supports (or tolerates) hacking, to their upstream ISP. While certain branches of the government and the military are allowed and even encouraged to hack back, such actions are not appropriate for corporate security professionals. The possible risks far outweigh the gains.

21.6.3 Recovery

Backup, backup, backup. Recovery is much simpler if you can just plug in a CD-ROM with yesterday's (or a week-old) copy of your data and continue from there. However, imagine that a malicious virus destroyed your collection of MP3s and that your hamster ate your backup CD. Is all hope lost? The short answer is yes. We are only half-joking, since there is no guarantee that any material will be recovered.
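For completeness, the happy path looks like this. It is a minimal sketch with invented paths; in real life the archive would live on separate, offline media, well away from hungry hamsters.

```shell
#!/bin/sh
# Some data worth keeping (paths and contents invented for the example).
mkdir -p /tmp/mydata
echo "quarterly report" > /tmp/mydata/report.txt

# Take the backup. In practice, write it to removable or offsite media.
tar czf /tmp/mydata-backup.tar.gz -C /tmp mydata

# Disaster strikes: the original data is destroyed.
rm -rf /tmp/mydata

# Recovery is just unpacking yesterday's archive and continuing from there.
tar xzf /tmp/mydata-backup.tar.gz -C /tmp
cat /tmp/mydata/report.txt
```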

In Windows 9x/ME, there are tools that provide reliable file undeletion, if they are used a short time after the file is destroyed. How the file was destroyed makes a difference during recovery attempts. For example, one known worm overwrote files with zero content, without removing them. In this case, most available Windows undelete utilities failed, since they are designed to recover files that are deleted and not replaced with zero-sized copies.

In Windows NT/2000/XP, there is a chance of recovery as well. If NT/2000 was installed on a FAT partition (the same as Windows 9x uses), the files can probably be recovered. In NTFS, the chances for recovery are much lower.

The Unix situation is even worse. An old Unix reference once claimed that on Unix there are no "problems with undeleting removed files" for the simple reason that "it is impossible." In reality, undeleting is not entirely impossible, but to do so requires spending time with forensics tools that often find only pieces of files, and then only after extensive content-based searching. Such a process is also Unix vendor, version, and flavor-dependent. For example, Red Hat Linux versions up to 7.2 allowed easy undeletion using tools such as e2undel and recover (based on a Linux Undeletion HOWTO available at http://www.linuxdoc.org). However, due to some changes in filesystem code, what was once easy is no longer possible. Overall, Unix file recovery falls firmly into the domain of computer forensics (see Chapter 22).

Briefly, The Coroner's Toolkit (TCT) gives you a finite chance to restore files on Solaris, SunOS, FreeBSD, OpenBSD, and Linux (of course). TCT is the most popular Unix forensics tool. A newer competitor has been released by Brian Carrier (from @Stake): the TASK toolkit incorporates TCT functionality with the TCT-Utils package (also by Brian Carrier). The undeletion functionality of TCT that works on all supported Unix flavors is the unrm/lazarus combo.

Overall, the undeletion procedure for these tools is as follows:

  1. Become root on your system.

  2. Determine which filesystem the file was erased from (if you lost /home/you/important.txt and your df command tells you /dev/hda5 is mounted as /home , then the file was on partition /dev/hda5 ).

  3. Unmount the above partition or even take the disk out and install it in a different machine. Another good solution is to make an image (bit-by-bit or forensic) copy and operate on it. Use a different machine for recovery. The goal is to make sure the file is not overwritten by your recovery effort.

  4. Run the unrm tool on the above partition:

     #  ~/tct-1.09/bin/unrm /dev/hda5 > /tmp/all-data  

    Make sure /tmp is not part of /dev/hda5 !

  5. Now run lazarus:

     #  ~/tct-1.09/lazarus/lazarus -r /tmp/all-data  
  6. Start up your browser and open the file ~/tct-1.09/www/all-data.frame.html . You should be able to look at all deleted files (with no names) by type.

  7. As an alternative to step 6, you can go to ~/tct-1.09/blocks and look for your file based on size and type. Run various commands (such as grep and file ) to locate the file in the sea of removed file chunks.

Unfortunately, this procedure is not guaranteed to work. Success greatly depends on a combination of luck (the most important factor), the amount of time that has passed since file deletion, and your knowledge of the file parameters. It is much easier to recover text files, since you can just use grep within a block to look for the file content.
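The mechanics of steps 3 through 7 can be simulated without a real partition. The following is an illustration only: a plain file stands in for /dev/hda5, and the "deleted" content and search string are invented. It shows why a bit-by-bit copy plus a content-based grep can find a text file even when the filesystem no longer has a name for it.

```shell
#!/bin/sh
# A plain file stands in for the raw partition; its contents simulate
# blocks left behind by a deleted text file (all invented for the example).
printf 'junk junk Dear Bob, the password is swordfish junk' > /tmp/fake-hda5

# Step 3: make a bit-by-bit image and work on the copy, never the original,
# so the recovery effort cannot overwrite the lost blocks.
dd if=/tmp/fake-hda5 of=/tmp/all-data bs=512 2>/dev/null

# Step 7: content-based search through the raw blocks. The -a flag makes
# grep treat the binary image as text, as you would with real unrm output.
grep -a "Dear Bob" /tmp/all-data > /dev/null && echo "block containing the file found"
```

This is also why the text warns that /tmp must not be part of the partition being imaged: writing the image onto the same filesystem could overwrite the very blocks you are trying to recover.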


Security Warrior
ISBN: 0596005458
Year: 2004
Pages: 211