Text sorting

Introduction
One interviewer asked me how I would sort lines in a 5GB text file if I had only 1GB of RAM. The task seemed quite interesting to me, and I couldn't resist implementing it the next morning.

Background 
I wish I had some background on it, but I really don't know much about external sorting algorithms. I wrote this code mostly using intuition, so I would appreciate it if you shared your knowledge of more advanced techniques in the comments.

Algorithm 
In order to sort the file, I first split it into smaller files. On the first iteration I run through the input file and group together all lines that start with the same character. If some of the resulting files are still larger than the amount of available memory, I split them by their first two characters, and so on. Then each smaller file is sorted in memory and saved to disk. The final stage is merging the sorted list of sorted files into a single output file.
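To make the idea concrete before the worked example, here is a deliberately simplified sketch in C#. It is not the article's code: it splits only by the first character, ignores the memory limit and the edge cases discussed later, and all names are made up.

   using System;
   using System.Collections.Generic;
   using System.IO;
   using System.Linq;

   static class OnePassSplitSortSketch
   {
       public static void Sort(string inputFile, string outputFile)
       {
           // 1. Split: route every line into a temporary file keyed by its first character.
           var chunkWriters = new Dictionary<char, StreamWriter>();
           foreach (string line in File.ReadLines(inputFile))
           {
               char key = line.Length > 0 ? line[0] : '\0';
               if (!chunkWriters.TryGetValue(key, out var writer))
                   chunkWriters[key] = writer = File.CreateText($"chunk_{(int)key}.txt");
               writer.WriteLine(line);
           }
           foreach (var writer in chunkWriters.Values) writer.Dispose();

           // 2. Sort each chunk in memory. 3. Merge: the chunks cover disjoint
           // key ranges, so merging is just concatenating them in key order.
           using var output = File.CreateText(outputFile);
           foreach (char key in chunkWriters.Keys.OrderBy(k => k.ToString(), StringComparer.CurrentCulture))
           {
               string chunk = $"chunk_{(int)key}.txt";
               foreach (string line in File.ReadAllLines(chunk).OrderBy(l => l, StringComparer.CurrentCulture))
                   output.WriteLine(line);
               File.Delete(chunk);
           }
       }
   }

The real implementation differs in two important ways: it re-splits chunks that are still too large (by two characters, three characters, and so on), and it handles the degenerate inputs described further below.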

Example. Consider the following text file:
   apricot
   apple
   orange
   milk
   avocado
   meat
For simplicity, let's assume that we are running a very ancient computer and can't afford to sort files larger than 10 bytes. Our input file is larger, so we need to split it. We start by splitting by one character, so after the first step we will have three files:

a
   apricot
   apple
   avocado
m
   milk
   meat
o
   orange
Files m and o can now be sorted in memory. However, file a is still too large, so we need to split it further.

ap
   apricot
   apple
av
   avocado 
File av is smaller than ten bytes; however, file ap is still too large, so we split once again.

apr
   apricot
app
   apple
Now that we have five small sorted files instead of a single big one, we arrange them in the order of their prefixes and merge them together, saving the result into the output file.

app   apple
apr   apricot
av    avocado
m     meat
      milk
o     orange
Looks good. However, this algorithm has a flaw: suppose the input file contains five gigabytes of the same line repeated many times. It's easy to see that in this case the algorithm will get stuck in an endless loop, trying to split this file over and over again. A similar problem is illustrated below. Suppose we have the following strings and our memory is not sufficient to sort them:

a
ab
abc
abcd
abcde
abcdef

As they all start with 'a', they will all be copied into the same chunk on the first iteration. On the second iteration we would split the lines by their first two characters, but the line 'a' consists of only one character! We will face the same situation on every subsequent iteration.

I handle these two problems by separating strings shorter than the current prefix length into a special unsorted file. Since we don't need to sort that file, it can be of any size. (If only case-sensitive sorting were supported, it wouldn't even be necessary to save these short lines into a file; counting them would be enough.)
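In code this boils down to a single guard when a line is routed during the split. A minimal sketch with illustrative names (chunkWriter stands for whichever per-prefix chunk the line would normally go to; the article's own code stores such lines through ChunkInfo.AddSmallString, described below):

   using System.IO;

   static class ShortLineGuard
   {
       // Route one line while splitting by prefixes of length prefixLength.
       // A line shorter than the prefix (e.g. "a" when splitting by two characters)
       // can never be split further, so it is parked in a separate unsorted file
       // instead of being pushed into a chunk and re-split forever.
       public static void Route(string line, int prefixLength,
                                TextWriter chunkWriter, TextWriter unsortedWriter)
       {
           if (line.Length < prefixLength)
               unsortedWriter.WriteLine(line);
           else
               chunkWriter.WriteLine(line);
       }
   }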

Incidentally, the algorithm is stable, i.e. it maintains the relative order of records with equal values.

Using the code 
The class HugeFileSort contains the following properties, which specify how the sorting will be performed:
  • MaxFileSize - the maximum file size that can be sorted in-memory (100MB by default)
  • StringComparer - comparer used for sorting (case-sensitive CurrentCulture by default)
  • Encoding - the input file encoding (UTF8 by default)
The main method is simply called Sort. It accepts two strings: the input and output file names. If the size of the input file is less than MaxFileSize, its content is loaded into memory and sorted there. Otherwise, the procedure described above is performed.
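A typical call looks like the following sketch. It assumes the properties listed above can be set through an object initializer; the exact signatures may differ slightly in the attached source.

   using System;
   using System.Text;

   class Program
   {
       static void Main()
       {
           var sorter = new HugeFileSort
           {
               MaxFileSize = 100 * 1024 * 1024,                        // 100MB in-memory limit
               StringComparer = System.StringComparer.CurrentCultureIgnoreCase,
               Encoding = Encoding.UTF8
           };

           // Sorts unsorted.txt into sorted.txt; files larger than MaxFileSize
           // are split through temporary files as described above.
           sorter.Sort("unsorted.txt", "sorted.txt");
       }
   }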

During execution, a temporary directory tmp is created in the current folder. For the sake of better demonstration, the final set of temporary files is not deleted and stays in that folder. In production code, please uncomment the two lines in the Merge method.

The core of the algorithm is the splitting method.
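The original source is attached to the article; what follows is only a rough reconstruction of what such a splitting step can look like, with made-up names and without the FileChunk/ChunkInfo plumbing described below. It routes every line to a chunk file chosen by its prefix, parks lines that are too short, sorts chunks that fit into memory, and recurses into the ones that don't:

   using System;
   using System.Collections.Generic;
   using System.IO;
   using System.Linq;

   static class SplitSketch
   {
       const long MaxFileSize = 100 * 1024 * 1024;                  // in-memory sorting threshold
       static readonly StringComparer Comparer = StringComparer.CurrentCulture;

       // Returns the produced files in the order their contents must be
       // concatenated to obtain sorted output.
       public static List<string> Split(string file, int prefixLength, string tmpDir)
       {
           var chunkFiles = new SortedDictionary<string, string>(Comparer);   // prefix -> chunk path
           var writers = new Dictionary<string, StreamWriter>();
           string noSortFile = Path.Combine(tmpDir, Guid.NewGuid() + ".nosort");

           using (var noSort = File.CreateText(noSortFile))
           {
               foreach (string line in File.ReadLines(file))
               {
                   // Lines shorter than the current prefix cannot be split further.
                   if (line.Length < prefixLength) { noSort.WriteLine(line); continue; }

                   string prefix = line.Substring(0, prefixLength);
                   if (!writers.TryGetValue(prefix, out var writer))
                   {
                       string path = Path.Combine(tmpDir, Guid.NewGuid() + ".chunk");
                       chunkFiles[prefix] = path;
                       writers[prefix] = writer = File.CreateText(path);
                   }
                   writer.WriteLine(line);
               }
           }
           foreach (var writer in writers.Values) writer.Dispose();

           // Short lines are prefixes of everything else in this file, so they go first.
           var result = new List<string> { noSortFile };
           foreach (string path in chunkFiles.Values)
           {
               if (new FileInfo(path).Length <= MaxFileSize)
               {
                   // Small enough: sort this chunk in memory (stably) and keep it.
                   File.WriteAllLines(path, File.ReadAllLines(path).OrderBy(l => l, Comparer));
                   result.Add(path);
               }
               else
               {
                   // Still too big: split it again by a longer prefix.
                   result.AddRange(Split(path, prefixLength + 1, tmpDir));
               }
           }
           return result;
       }
   }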


FileChunk and ChunkInfo are auxiliary nested classes. The former is a helper that corresponds to the new files created on each iteration and is used to write lines into them. The latter contains information about the data that will be merged into the resulting file. During the recursive work of the algorithm, the program populates a sorted dictionary that maps line prefixes to instances of ChunkInfo.

ChunkInfo contains the following information:
  • FileName - the name of the file that contains sorted lines starting with the given substring
  • NoSortFileName - the name of the file that contains unsorted lines equal to the given substring (they may differ in case)
Both properties can be null.
The class also contains the method AddSmallString(), which writes a given string into the NoSortFileName file.
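Given that dictionary, the merge stage reduces to walking it in key order and appending each chunk's files to the output. A sketch (the ChunkInfo shape follows the description above; everything else is illustrative, and unlike the attached code it does not delete the temporary files):

   using System.Collections.Generic;
   using System.IO;

   // Shape as described above; either property may be null.
   class ChunkInfo
   {
       public string FileName { get; set; }        // sorted lines starting with the prefix
       public string NoSortFileName { get; set; }  // unsorted lines equal to the prefix
   }

   static class MergeSketch
   {
       // chunks maps each prefix to its ChunkInfo; the SortedDictionary keeps
       // the prefixes in comparer order, so the merge is a single ordered pass.
       public static void Merge(SortedDictionary<string, ChunkInfo> chunks, string outputFile)
       {
           using var output = File.CreateText(outputFile);
           foreach (ChunkInfo chunk in chunks.Values)
           {
               // Lines equal to the prefix compare no greater than the longer
               // lines that merely start with it, so they are written first.
               Append(chunk.NoSortFileName, output);
               Append(chunk.FileName, output);
           }
       }

       static void Append(string fileName, TextWriter output)
       {
           if (fileName == null) return;            // either file may be absent
           foreach (string line in File.ReadLines(fileName))
               output.WriteLine(line);
       }
   }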
The test application requires three command-line arguments: the input file name, the output file name, and the maximum file size in bytes. It performs case-insensitive sorting of UTF-8 files.
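For example, to sort input.txt into sorted.txt with a 100MB limit (104857600 bytes; the executable name depends on how the test project is built and is only assumed here):

   HugeFileSort.exe input.txt sorted.txt 104857600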


Limitations
If you want to break it, pass in a large file with few or no line endings. Since the code uses the standard TextReader to read lines from the input file, each line is loaded into memory in its entirety, so the algorithm is not protected against such input.
