Project 26. Sort and Compare Text Files

"How do I compare two files for common lines?"

This project shows you how to sort the contents of text files, pick out commonalities and differences between sorted files, and strip duplicate lines. It covers the commands sort, comm, and uniq.

Sort Files

First, we'll take a look at the sort command, which sorts the lines of a file into order and can also merge several sorted files. We'll demonstrate its most useful features by sorting a directory listing into file-size order.

Then, for our next trick, we'll demonstrate the use of sort, uniq, and comm in combination to detect identical image files that differ only in name. uniq removes duplicate lines from a sorted file. comm compares two sorted files, displaying in three columns the lines common to both and the lines that occur in just one file or the other.

The sort command sorts the lines of a text file into order. To demonstrate some of its features, let's sort a directory listing into file-size order. (This can be done by ls -lS, of course, but for the purpose of this exercise, we'll use ls merely to generate a file that's a suitable candidate for sort.)

We'll generate the file by piping a directory listing to awk and telling awk to print the ninth field (the filename) in a left-aligned 20-character column, followed by the fifth field (the file size in bytes).

$ ls -l | awk '{printf "%-20s %s\n", $9, $5}'
GM-river-1.psd       34730919
GM-river-2.psd       35552718
Natacha-1.psd        26613850
Natacha-2.psd        21511927
ferdi-coll3.psd      16159247
ferdi-cool.psd       31266062
...


Learn More

See Project 60 to learn more about awk's formatting capabilities.

Project 6 explains redirection and pipelining.


Next, we pipe the "file" directly to sort, which in the absence of a filename accepts the output from awk as its standard input. The sort command must be told, via option -k, which field is the key to sort on. In this case, the key (file size) appears in field 2 of the input, so the appropriate syntax is -k2.

$ ls -l | awk '{printf "%-20s %s\n", $9, $5}' | sort -k2
jj2.psd              4645653
kids.jpg             11532
lips.psd             13333630
monsta.psd           1037986
jj12b8.psd           6102391
lor-sep.psd          20337917
...


That wasn't exactly what we expected. A gotcha with sort concerns the field separator, which is a space or tab. Because our file pads the filename column with multiple spaces, each sort key begins with a run of blanks, and sort compares those blanks too. To make sort ignore the leading blanks in a key, specify option -b.

$ ls -l | awk '{printf "%-20s %s\n", $9, $5}' | sort -k2 -b
monsta.psd           1037986
kids.jpg             11532
lor-stone2.psd       12178244
lips-red.psd         13333630
lips.psd             13333630
...


Better, but sort is treating the "number" as a sequence of characters and not considering its numerical value. Another option, -n, is called for.

$ ls -l | awk '{printf "%-20s %s\n", $9, $5}' | sort -k2 -b -n
kids.jpg             11532
monsta.psd           1037986
ferdi-polish.psd     1676649
ferdi-gala.psd       1780828
gala-hair.psd        2616768
...


Perfect! That does what we want. Check out the man page for sort; it has many more options, one of the most useful being -t to specify a field separator other than space or tab.
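To illustrate -t (this isn't part of the image exercise), here's /etc/passwd, whose fields are colon-separated, sorted numerically on field 3, the user ID:

$ sort -t: -k3 -n /etc/passwd
nobody:*:-2:-2:Unprivileged User:/var/empty:/usr/bin/false
root:*:0:0:System Administrator:/var/root:/bin/sh
daemon:*:1:1:System Services:/var/root:/usr/bin/false
...

The exact entries will vary from system to system, but the user IDs will run from lowest to highest.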

Detect Duplicate Files

Now we'll look at uniq and comm and combine them with sort to detect duplicate image files. Suppose that we have a directory with many image files. Some files are identical to others, though their names do not necessarily indicate this. We want to detect the duplicates so we can delete them.

An image can be uniquely identified by a combination of its checksum and its size. Different images have different checksums; duplicate images have the same checksum. (cksum computes a 32-bit CRC, so there's roughly one chance in four billion that two different images will have the same checksum; requiring the sizes to match as well makes a false match even less likely.) We'll use command cksum (demonstrated below), which outputs a file's checksum in field 1, the file's size in field 2, and its filename in field 3:

$ cksum gala-hair.psd
1428733187 2616768 gala-hair.psd
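Incidentally, if you'd rather not trust a 32-bit CRC, Mac OS X also ships an md5 command. Here's a sketch (not used in the rest of this project) of an equivalent first stage: option -r makes md5 print the digest first and the filename second, so awk swaps them exactly as it does for cksum, and MD5's 128 bits make the size field unnecessary.

$ md5 -r * | awk '{print $2, $1}' | sort -k2 > /tmp/md5sums

The uniq -f1 filtering shown below then applies unchanged.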


Using cksum, we'll identify the nonduplicate files, which is quite the opposite of what is required, but we'll correct that later. We'll create a temporary file, /tmp/checksums, that contains the filename, checksum, and size of each image, sorted by checksum; then we'll use command uniq to filter out duplicate lines.

To create the checksum-sorted file, we'll use awk to rearrange the cksum output fields, placing the filename (field 3) first. (This will be required by uniq, as we shall see later on.) The final stage of the pipeline uses sort to sort on field 2 (the checksum).

$ cksum * | awk '{print $3, $1, $2}' | sort -k2 > ¬
    /tmp/checksums
$ cat /tmp/checksums
gala-hair.psd 1428733187 2616768
gala-longhair.psd 1428733187 2616768
gala-longhairs.psd 1428733187 2616768
lor-sep.psd 1806568675 20337917
ferdi-polish.psd 1872815908 1676649
ferdi-cool.psd 2145153277 31266062
GM-river-2.psd 2170060896 35552718
GM-river-1.psd 2176652953 34730919
lor-stone.psd 2485586841 4266597
lor-stones.psd 2485586841 4266597
jj2.psd 2796515583 4645653
lips-red.psd 996895841 13333630
lips.psd 996895841 13333630


Next, we run the temporary file through the uniq command. If two or more adjacent lines are the same, uniq outputs only the first. We want to ignore filenames in this comparison, as we are interested only in the uniqueness of checksum and size, so we'll use uniq option -f n, which tells uniq to skip the first n fields before lines are compared. This is why we put the filename first earlier. (Note that the filename is needed; otherwise, we can't identify the file to delete later.)
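One point worth remembering: uniq collapses only adjacent repeated lines, which is why we took care to sort the listing first. A throwaway example (nothing to do with our images) makes this clear:

$ printf 'a\nb\na\n' | uniq
a
b
a
$ printf 'a\nb\na\n' | sort | uniq
a
b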

To keep only unique files in the list, we use

$ uniq -f1 /tmp/checksums
gala-hair.psd 1428733187 2616768
lor-sep.psd 1806568675 20337917
ferdi-polish.psd 1872815908 1676649
ferdi-cool.psd 2145153277 31266062
GM-river-2.psd 2170060896 35552718
GM-river-1.psd 2176652953 34730919
lor-stone.psd 2485586841 4266597
jj2.psd 2796515583 4645653
lips-red.psd 996895841 13333630


Looking at the two listings above, it'd be easy to spot and delete the four duplicate files that are listed in /tmp/checksums but not in the uniq-filtered list. It wouldn't be quite so easy if we were working with 1,000-plus images, however! We need to compare the two lists automatically and pick out lines that appear in the first but not in the second. The comm command does exactly that. It compares two sorted files and outputs three columns: Column 1 contains lines found only in file1; column 2, lines found only in file2; and column 3, lines in both files. Options -1, -2, and -3 suppress printing of their respective columns.
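Before we apply comm to our listings, here's a tiny throwaway illustration of the three columns (column 2 is indented by one tab stop and column 3 by two):

$ printf 'apple\nbanana\ncherry\n' > /tmp/list1
$ printf 'banana\ncherry\ndate\n' > /tmp/list2
$ comm /tmp/list1 /tmp/list2
apple
		banana
		cherry
	date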

Tip

Many commands take dash ( - ) as an input filename, which tells them to read standard input (which might be a pipeline) instead of a file.
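For example, cat copies its standard input when given dash in place of a filename:

$ echo hello | cat -
hello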


Therefore, we pass to comm the full listing in /tmp/checksums as file1 and the uniq-filtered list as file2, both sorted lexically as required by comm, and suppress columns 2 and 3 from being output. We use a neat trick whereby we pipe the sorted list directly to comm's standard input instead of saving it to a temporary file. The first filename is given as dash, which tells comm to read its standard input.

Our final command displays just column 1, which is a list of those files that were filtered out by uniq. Thus, we are left with the duplicate images.

The commands we have built up are

$ cksum * | awk '{print $3, $1, $2}' | sort -k2 > ¬
    /tmp/checksums
$ uniq -f1 /tmp/checksums | sort > /tmp/checksums-uniq
$ sort /tmp/checksums | comm -23 - /tmp/checksums-uniq
gala-longhair.psd 1428733187 2616768
gala-longhairs.psd 1428733187 2616768
lor-stones.psd 2485586841 4266597
lips.psd 996895841 13333630


The output lists the duplicate files we can safely delete.

Finally, we want to remove each file, a goal easily achieved by adding a few more stages to the pipeline. Use awk to pass just the filename (field 1) on to the next stage in the pipeline and xargs to form a delete (rm) command from the list of filenames passed to it. The complete solution is shown below. Note that the rm command is preceded by echo. This causes the command line that would otherwise be executed to be displayed instead, a sensible precaution until we are sure that the pipeline is forming the correct command line. When we have verified that the command looks good, we can remove echo and commit to deleting the duplicate images.

Note

This example will not work with filenames that contain spaces. Spaces in filenames are often problematic and require special treatment with regard to field separators.
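When space-safety matters, the usual technique is to pass filenames between commands separated by NUL characters rather than newlines, using find's -print0 option together with xargs -0. The sketch below shows the general idea only; it is not a drop-in replacement for our pipeline, because awk's space-separated fields would still split such names.

$ find . -name '*.psd' -print0 | xargs -0 echo rm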


A Shorter Version?

The eagle-eyed among you may have spotted that uniq has an option -d, which apparently does exactly what we need here: it lists the non-unique files instead of the unique files. Unfortunately, it fails when a file has two or more duplicates.

$ cksum *.psd | awk '{print $3, $1, $2}' | sort -k2 ¬
    | uniq -f1 -d
gala-hair.psd 1428733187 2616768
lor-stone.psd 2485586841 4266597
lips-red.psd 996895841 13333630


Only three duplicates have been detected instead of four: uniq -d prints just one line for each group of repeated lines, so the gala group, which contains two duplicates, yields only a single line.


The final command becomes

$ sort /tmp/checksums | comm -23 - /tmp/checksums-uniq ¬
    | awk '{print $1}' | xargs echo rm
rm gala-longhair.psd gala-longhairs.psd lor-stones.psd lips.psd


Here's an alternative way to delete the files. We'll enclose the pipeline sequence in $(), which tells Bash to execute it, write the result back to the command line, and then execute the new command line.

$ echo rm $(sort /tmp/checksums | comm -23 - ¬
    /tmp/checksums-uniq | awk '{print $1}')
rm gala-longhair.psd gala-longhairs.psd lor-stones.psd lips.psd




