3.6 Disk Sniffing

[Danger level: 4]

In "Stopping Access to I/O Devices" on page 268, "Stopping Uncontrolled Access to Data" on page 72, and "Finding Permission Problems" on page 59 the problem of nonroot users being able to read (or alter) your users' confidential data is solved. But what about someone who gains root access? Will he really be able to sniff the disk for credit card numbers even though the temporary files were removed?[12] What if the boss asks, "Can you make sure that my file whizbang.mm really is gone?" What if one salesman asks if another salesman can see e-mail after it has been sent (and removed from the first salesman's outgoing mail archive)?

[12] Hopefully, no files of credit card data are being maintained on the Web server or accessible to the corporate network. A very secure technique for safeguarding this credit card database is discussed in "One-Way Credit Card Data Path for Top Security" on page 302.

If special precautions have not been taken, the answer is that someone operating as root can sniff the disk and possibly find this confidential data. This is because when a file is removed from the system, the data blocks it occupied are marked "not in use," but the data in those blocks is not overwritten. A grep on the raw disk partition will find this data very easily.

Linux operates on the assumption that root is trusted; since root is all powerful, there is no alternative. Very few programs worry about this, because anyone untrustworthy operating as root could sniff the data before the files were removed anyway. Although root also can sniff memory, keyboard strokes, and the like, such data is transitory; disk data can remain for a long time.

3.6.1 Truly Erasing Files

[Danger level: 5]

This lack of data destruction is a problem if a user wants to remove a confidential file and ensure that no one, including root, can see its contents on disk at any future time. The preferred solution is to overwrite the file's blocks before removing the file. An alternative, discussed later in this section, is to overwrite all the free blocks on the file system, ensuring that any free blocks still holding the confidential data get overwritten. The alternative is good for solving this problem after the fact, when someone asks you about a file he already has removed. It also works for files removed by programs that you do not control or do not want to modify; sendmail comes to mind here.

Let us consider in depth the premeditated destruction of data. To ensure that a file system's free blocks do not contain removed confidential data, you need to write over those blocks, and this requires some understanding of the ext2 file system. It is an improvement on the Berkeley Fast File System, which in turn improved on the venerable UNIX File System dating back to the early 1970s. Suppose user joe wants to ensure erasure of the document nomerger.mm in /home/joe. This document has confidential details of a failed merger proposal with Pentacorp.

A simple rm nomerger.mm will not work, because even the simple command

 
 grep -a -100 -b Pentacorp /dev/hda3 | more 

will search the raw disk device and find blocks containing Pentacorp, including freed blocks. The -100 flag shows 100 lines of context before and after each matched line, and the -b flag shows the byte offset within the device file /dev/hda3, so that the spy can later use dd to examine the blocks all around the matched ones. You can try this yourself and see it work. Some people incorrectly assume that truncating a file (> nomerger.mm in bash, or cp /dev/null nomerger.mm) will work. It will not; it simply frees the blocks. Again, grep can be used to prove this.
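
The dd step deserves a quick illustration. The following minimal C sketch (mine, not the book's; the name dump_around.c is invented) backs up a few KB before a byte offset reported by grep -b and copies the surrounding bytes to standard output, where od can format them:

 /* dump_around.c: sketch of inspecting a raw device around an offset.
  * Usage: dump_around /dev/hda3 OFFSET | od -c | more
  */
 #include <stdio.h>
 #include <stdlib.h>
 #include <fcntl.h>
 #include <unistd.h>

 int main(int argc, char **argv)
 {
     char    buf[8192];
     off_t   offset;
     ssize_t n;
     int     fd;

     if (argc != 3) {
         fprintf(stderr, "Usage: %s device offset\n", argv[0]);
         return 1;
     }
     offset = (off_t)strtoull(argv[2], NULL, 10);
     if (offset > 4096)
         offset -= 4096;        /* show the blocks just before the match too */
     fd = open(argv[1], O_RDONLY);
     if (fd < 0) {
         perror(argv[1]);
         return 1;
     }
     lseek(fd, offset, SEEK_SET);
     n = read(fd, buf, sizeof buf);
     if (n > 0)
         write(1, buf, (size_t)n);   /* copy the raw bytes to stdout */
     close(fd);
     return 0;
 }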

To actually stomp on this data easily, you need to write over the blocks while they still are allocated to the file. One way to do this is with a small C program or Perl script. The program discussed here accomplishes this, and its code may be integrated into other programs (subject to the stated license restrictions). The program is called overwrite.c and is available on the CD-ROM. It uses the open() system call to open the specified file for writing. The creat() system call would instead truncate the file to zero length first, marking its blocks "free" without overwriting them; open() gives you access to the existing blocks.

Because Linux allows a program to treat any regular disk file as a random I/O file, a program may write over the existing blocks or parts of them. The kernel implements this by reusing the existing disk block numbers, and the overwrite program relies on this implementation. It opens the file for writing and determines how large it is, in bytes. It works on files up to 2 GB in size.

Then the program uses lseek() to position the starting location for I/O at the beginning of the file. It then overwrites the entire file, 1 KB at a time, with NUL bytes. Recall that the C language specifies that statically declared data (data declared outside a function, or declared static) is initialized to NULs (binary zeros).
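
The heart of the program looks something like the following minimal sketch. This is not the overwrite.c from the CD-ROM, merely my illustration of the steps just described:

 /* Sketch of the overwrite technique: write NULs over a file's
  * existing blocks without freeing them.
  */
 #include <stdio.h>
 #include <fcntl.h>
 #include <unistd.h>
 #include <sys/stat.h>

 static char zeros[1024];        /* static data: guaranteed all NUL bytes */

 int main(int argc, char **argv)
 {
     struct stat st;
     off_t       left;
     int         fd;

     if (argc != 2) {
         fprintf(stderr, "Usage: %s file\n", argv[0]);
         return 1;
     }
     /* O_WRONLY without O_TRUNC: the existing blocks stay allocated */
     fd = open(argv[1], O_WRONLY);
     if (fd < 0 || fstat(fd, &st) < 0) {
         perror(argv[1]);
         return 1;
     }
     lseek(fd, 0, SEEK_SET);     /* start I/O at the beginning of the file */
     for (left = st.st_size; left > 0; left -= 1024) {
         size_t n = left > 1024 ? 1024 : (size_t)left;
         if (write(fd, zeros, n) != (ssize_t)n) {
             perror("write");
             return 1;
         }
     }
     fsync(fd);                  /* force the overwritten blocks to disk */
     close(fd);
     return 0;
 }

The fsync() call plays the same role as the sync command used in the test below.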

Should you take my word that this program works? Of course not. I tested it by first creating a file called foo on the file system on /dev/hdc1, which was mounted on /mnt. I created it via

 
 cat /etc/passwd /etc/group /etc/inetd.conf > /mnt/foo 

I then issued the command

 
 debugfs /dev/hdc1 

Then, at the

 
 debugfs: 

prompt, I entered

 
 stat /foo 

One may quit out of debugfs with the "q" command or with Ctrl-D. The debugfs program understands the structure of the ext2 file system and allows analysis and even repair of severely corrupted file systems.

I used debugfs to recover 95 percent of a client's Linux system after he unintentionally caused "rm -rf /" to occur: he told the system to remove an account name he did not recognize, and that account happened to be a system account with a home directory of "/". He had no backups of important work. Certainly the GUI program was poorly designed; I had advised him previously to start doing backups.


Because you will not be specifying the -w flag, which would allow writing to the file system, it is safe to invoke debugfs while /dev/hdc1 is mounted. Upon startup, debugfs displays some information about the file system and then prompts with debugfs:. I then issued the command stat /foo. Recall that all file names given to debugfs are relative to the mount point, /mnt in this case. The stat command shows all the information kept in the file's "inode" (short for information node).

This information includes a list of the disk block numbers (relative to the start of that partition) that contain the data in the file. In this case for me it showed

 
 Inode: 13   Type: regular   Mode:  0644   Flags: 0x0   Version: -665893048
 User:     0  Group:     0   Size: 4387
 File ACL: 0    Directory ACL: 0
 Links: 1   Blockcount: 10
 Fragment:  Address: 0    Number: 0    Size: 0
 ctime: 0x38f2390c -- Mon Apr 10 16:26:52 2000
 atime: 0x38f2390c -- Mon Apr 10 16:26:52 2000
 mtime: 0x38f2390c -- Mon Apr 10 16:26:52 2000
 BLOCKS:
 1251781 1251782 1251783 1251784 1251785
 TOTAL: 5

I then exited debugfs with Ctrl-D, invoked overwrite /mnt/foo; sync to force any in-memory disk buffers to disk, and reissued the debugfs stat command. The results were:

 
 Inode: 13   Type: regular   Mode:  0644   Flags: 0x0   Version: -665893048
 User:     0   Group:     0   Size: 4387
 File ACL: 0    Directory ACL: 0
 Links: 1   Blockcount: 10
 Fragment:  Address: 0    Number: 0    Size: 0
 ctime: 0x38f23e53 -- Mon Apr 10 16:49:23 2000
 atime: 0x38f2390c -- Mon Apr 10 16:26:52 2000
 mtime: 0x38f23e53 -- Mon Apr 10 16:49:23 2000
 BLOCKS:
 1251781 1251782 1251783 1251784 1251785
 TOTAL: 5

As you can see, the file still occupies the same blocks, in the same order, giving convincing evidence of the correctness of the program. A subsequent octal dump verified that the data was overwritten:

 
 od /mnt/foo
 0000000 000000 000000 000000 000000 000000 000000 000000 000000
 *
 0010440 000000 000000
 0010443

One might worry that the blocks were freed and then reallocated in the same order only because the file system was not otherwise active. This theory may be tested by modifying the overwrite.c program to also open some temporary file and alternate 1 KB writes between the two files, as sketched below.
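
A sketch of that modification to the write loop shown earlier follows; it is my illustration, not the book's code, and /tmp/scratch is just an invented name. If the target's blocks were being freed and reallocated, the interleaved scratch-file writes would compete for them and debugfs would report a changed block list:

 /* Modified write loop for the overwrite sketch: alternate 1 KB writes
  * between the target file (fd) and a scratch file.
  */
 int scratch = open("/tmp/scratch", O_WRONLY | O_CREAT | O_TRUNC, 0600);

 for (left = st.st_size; left > 0; left -= 1024) {
     size_t n = left > 1024 ? 1024 : (size_t)left;
     write(fd, zeros, n);                  /* overwrite the target file */
     write(scratch, zeros, sizeof zeros);  /* competing block allocation */
 }
 close(scratch);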

Even if you write over the blocks that contained confidential data, a "moderately funded" opponent would have no trouble reading the last two or three things written onto the disk, using a technique called Magnetic Force Microscopy (MFM). If the nature of your data is such that this is a concern, you can prevent it with a program called Wipe. It repeatedly writes particular patterns over the files to be destroyed, so that these "garbage" patterns become the last few layers, preventing anyone from ever reading your confidential data.

Wipe may be downloaded from the following place and is available on the CD-ROM:

www.debian.org/Packages/unstable/utils/wipe.html


An alternative to Wipe is to use an encrypted file system, or to store files on disk only in encrypted form. The latter technique is harder to get right, as even an unencrypted editor temporary file or data in the swap partition would be a security breach. Encrypted file systems are covered in "Encrypted Disk Driver" on page 274.

3.6.2 Destroying Old Confidential Data in Free Blocks

[Danger level: 5]

Again, user joe wants to ensure erasure of the document nomerger.mm in /home/joe, which has confidential details of a failed merger proposal with Pentacorp. This time, however, Joe already did rm nomerger.mm, so he cannot use the overwrite solution discussed earlier.

In this situation, the following is a good start; the reasons it is only a start and not a complete solution are discussed next.

 
 dd bs=1024k if=/dev/zero of=/home/joe/junk
 df
 rm /home/joe/junk

Certainly, you should check with other users and any other SysAdmins first, to ensure that temporarily filling up the disk partition will not cause anyone's processes to fail. The df command is used to verify that no free blocks remain on the device.

This process is trustworthy only if root does it, because some disk space normally is reserved for root and, depending on chance, the confidential data might be in the last blocks to be allocated. Those last blocks would never be allocated to a nonroot user's junk file, and thus would never be overwritten.

There are two gotchas to guard against. The first is the resource limit on file size. Note that this feature is implemented differently from the UNIX ulimit facility; some UNIX scripts and programs using ulimit or the ulimit() system call will not work.

The limit feature caps the maximum size of any file that a process may create. It is intended to prevent a runaway process (such as cat foo | cat >> foo) from filling up the disk accidentally. By default, some Linux systems specify a limit on the order of 1 GB. To see your limits, issue the tcsh commands

 
 limit
 limit -h

and look at the line starting with "filesize." Either of the following is typical. Note that the first invocation shows the current limits for the process, called the soft limits. The second invocation (with -h) shows the maximum limits, called the hard limits. A nonroot user may increase any soft limit up to the value of the hard limit.

 
 filesize        1048576 kbytes 

or

 
 filesize        unlimited 

For those using bash (/bin/sh), the command is ulimit; by default it shows the soft limit for maximum file size. Either of the following bash commands will display this soft limit in blocks (typically 1 KB units, though units as large as 8 KB are possible).

 
 ulimit 

or

 
 ulimit -S 

The hard limit may be displayed with this bash command:

 
 ulimit -H 

Either of these limits may be changed by specifying the new limit in blocks.

 
 ulimit -S 1000000
 ulimit -H 1000000

If these limits are encountered during the dd command, the following would be the expected error message.

 
 dd: /home/joe/junk: File too large 

Under tcsh, if you see limits on file size, you will want to remove them, as shown here:

 
 unlimit -h filesize ; unlimit filesize 
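
Under the hood, both shells manipulate the kernel's RLIMIT_FSIZE resource limit with the getrlimit() and setrlimit() system calls, and a program can do the same for itself. The following minimal sketch is mine, not the book's; it prints the soft and hard limits and raises the soft limit to the hard limit:

 /* Sketch: query and raise the file-size resource limit directly. */
 #include <stdio.h>
 #include <sys/resource.h>

 int main(void)
 {
     struct rlimit rl;

     if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
         perror("getrlimit");
         return 1;
     }
     /* RLIM_INFINITY will print as -1 with this cast */
     printf("soft %ld  hard %ld\n", (long)rl.rlim_cur, (long)rl.rlim_max);

     rl.rlim_cur = rl.rlim_max;   /* raise soft limit to the hard limit */
     if (setrlimit(RLIMIT_FSIZE, &rl) != 0) {
         perror("setrlimit");
         return 1;
     }
     return 0;
 }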

The other gotcha is that, up through and including the 2.2 versions of the Linux kernel, the maximum size of a file on an ext2 file system is 2 GB. This means that on large partitions you will need to create multiple junk files, and not remove any of them until all of them have been created. The following script, called fillup, takes the name of a directory to work under and will work with up to a 100 GB partition.

 
 #!/bin/csh -f
 # This script will overwrite up to 100 GB of
 # free blocks on a file system to write over
 # any possible confidential data.  It will
 # work on Linux systems where there is a
 # maximum file size of 2 GB.
 # /usr/local/bin/fillup: fill up a file system
 # to obliterate any possibly confidential data
 # that might be in the free blocks after the
 # files containing it were removed.
 # It expects a single argument which is
 # a directory on the file system to be
 # filled up.  When it is done it will
 # invoke df and prompt the SysAdmin to
 # verify that the df shows no free disk
 # space and to hit Enter.
 set fname="$1/junk$$"
 if ( $#argv != 1 ) then
         echo "Usage: $0 directory"
         exit 1
 endif
 if ( ! -d $1 ) then
         echo "$1 is not a directory"
         exit 1
 endif
 if ( ! -o /bin/su ) then
         echo Not root
         exit 1
 endif
 unlimit -h filesize
 unlimit filesize
 df
 # $i is quoted to protect against *
 foreach i ( ${fname}{x,y}{1,2,3,4,5}{a,b,c,d,e} )
         echo Filling "$i"
         dd bs=1024k if=/dev/zero of="$i"
         df
 end
 echo "Verify that the last 'df' shows no free space"
 echo -n '  then hit Enter to remove junk files: '
 set z="$<"
 foreach i ( ${fname}{x,y}{1,2,3,4,5}{a,b,c,d,e} )
         echo Removing "$i"
         /bin/rm -f "$i"
 end
 df
 exit 0

3.6.3 Erasing an Entire Disk

[Danger level: 3]

Erasing all the data from a disk is not as easy as it sounds. While it might be fun to type rm -rf /, it will not destroy the data. A client of mine caused

 
 rm -rf / 

to be executed accidentally. With a few days' effort I was able to recover 95 percent of his files, thanks to the good design of the ext2 file system and the debugfs program.

The debugfs program is like gdb (the GNU program debugger) for the ext2 file system. It is an excellent reason to ensure that only root can read or write your raw disk devices (/dev/hd* and /dev/sd*).

To erase an entire disk, say, /dev/hdb, issue the following command as root. No unlimit or limit command is needed, because those limits only apply to regular files under a file system.

 
 dd bs=1024k if=/dev/zero of=/dev/hdb 

The dd command allows a larger buffer size than cp (bs=1024k) for faster operation, and it has good error reporting. It is important to verify that the number of blocks written (megabytes in this example) accurately reflects the formatted size of the disk. This ensures that a disk write error did not cause a premature termination of the program.

Because we are working with devices rather than ordinary files on file systems, neither the 2 GB maximum file size of pre-2.4 kernels nor the resource limit applies. Speeds on the order of 250 MB/minute may be expected on older IDE disks, with higher speeds on newer disks and SCSI devices.

Note that mkfs /dev/hdb will not overwrite all of the data on the disk; it writes only the new file system's metadata, leaving most data blocks untouched.

3.6.4 Destroying a Hard Disk

[Danger level: 3]

What if you cannot access a disk at all, either because its interface is so ancient that you cannot connect it to a computer or because its electronics are broken, yet the data is so confidential that you do not want to risk it falling into the wrong hands? In this case, your organization probably has secure disposal procedures.

In the absence of an organizational policy, a sufficiently strong degaussing magnet will suffice for all but the spooks (intelligence operatives). You would need to open up the disk enclosure to remove any protective shielding, and you would need to contact the disk manufacturer to determine how strong a magnet is required. An efficient and sure alternative is to use sandpaper to remove the magnetic coating from the aluminum substrate and reduce it to powder. For a more "kosher" solution that meets US DoD and NSA requirements and is GSA approved, visit

www.semshred.com


       
    Top


    Real World Linux Security Prentice Hall Ptr Open Source Technology Series
    Real World Linux Security Prentice Hall Ptr Open Source Technology Series
    ISBN: N/A
    EAN: N/A
    Year: 2002
    Pages: 260

    flylib.com © 2008-2017.
    If you may any questions please contact us: flylib@qtcs.net