Text File Compression And Decompression Using Huffman Coding
How Does The Process Of Compression Work?
Compressing a text file reduces its size by converting the text into a more compact representation that takes up less space. Compression typically works by locating repeated strings/characters within a text file and replacing them with a shorter binary representation to reduce the overall file size. There are two types of file compression:
- Lossy compression: Lossy compression shrinks a file by permanently removing certain elements, particularly redundant elements.
- Lossless compression: Lossless compression can restore all elements of a file during decompression without sacrificing data or quality.
Text encoding is also of two types:
- Fixed length encoding and
- Variable length encoding.
The two methods differ in the length of the codes they assign. In practice, variable-length encoding produces considerably smaller output than fixed-length encoding. In variable-length encoding, characters are assigned a variable number of bits based on their frequency in the given text: a frequent character may need only one or two bits, while a rarer character may need three or more.
How to retain uniqueness of compressed text?
During the encoding process, every character is assigned and represented by a variable-length binary code. The problem with this approach lies in decoding: if the code of one character is a prefix of another character’s code, the decoder cannot tell where one code ends and the next begins. Hence, the “prefix rule” is used, which makes sure that the algorithm only generates uniquely decodable codes. Under this rule, no code is a prefix of any other, so the ambiguity is resolved.
Hence, for text file compression in this article, we leverage an algorithm that gives lossless compression and uses variable-length encoding with the prefix rule. The article also covers regenerating the original file using the decoding process.
Compressing a Text File:
We use the Huffman Coding algorithm for this purpose: a greedy algorithm that assigns a variable-length binary code to each input character in the text file. The length of a character’s code depends on its frequency in the file. The algorithm builds a binary tree in which all the unique characters of the file are stored in the tree’s leaf nodes.
- The algorithm works by first determining all of the file’s unique characters and their frequencies.
- The characters and frequencies are then added to a Min-heap.
- It then extracts the two minimum-frequency nodes and attaches them as children of a new dummy root node.
- The value of this dummy root is the combined frequency of its children, and this root node is added back to the Min-heap.
- The procedure is then repeated until there is only one element left in the Min-heap.
This way, a Huffman tree for a particular text file can be created.
Steps to build Huffman Tree:
- The input to the algorithm is the array of characters in the text file.
- The frequency of occurrences of each character in the file is calculated.
- A struct array is created in which each element holds a character along with its frequency. The elements are stored in a priority queue (min-heap), where they are compared using their frequencies.
- To build the Huffman tree, two elements with minimum frequency are extracted from the min-heap.
- The two nodes are added to the tree as the left and right children of a new root node whose frequency equals the sum of their two frequencies. The lower-frequency node becomes the left child and the higher-frequency node the right child.
- The root node is then again added back to the priority queue.
- The extraction and merging steps are repeated until there is only one element left in the priority queue.
- Finally, the tree’s left and right edges are labeled 0 and 1, respectively. The tree is traversed from the root, appending the corresponding 0 or 1 for each edge taken; when a leaf node is reached, the accumulated bits form that character’s code.
- Once we have a unique code for each unique character in the text, we can replace the characters with their codes. These codes are stored bit by bit, which takes up less space than the original text.
Algorithm explained with an example:
The above pictorial representation clearly demonstrates the complete Huffman coding algorithm for the text = “Stressed-desserts”.
Size of the original file = 17 characters * 1 byte = 17 bytes
Size of the encoded text = sum of (frequency * code length) = 1*4 (S) + 1*4 (-) + 2*3 (d) + 4*2 (e) + 2*3 (r) + 5*2 (s) + 2*3 (t) = 44 bits = 44/8 bytes = 5.5 bytes
Compressed File Structure:
So far, we have discussed generating variable-length codes and substituting them for the file’s original characters. However, this only compresses the file. The more difficult task is decompressing the file by decoding the binary codes back to their original characters.
Decoding requires adding some extra information to our compressed file. We therefore store the characters of the file along with their corresponding codes; during the decoding process, this allows the Huffman tree to be recreated.
The structure of a compressed file –
|Number of unique characters in the input file|
|Total number of characters in the input file|
|All characters with their binary codes (To be used for decoding)|
|Storing binary codes by replacing the characters of the input file one by one|
Decompressing the Compressed File:
- The compressed file is opened, and the number of unique characters and the total number of characters in the file are retrieved.
- The characters and their binary codes are then read from the file. We can recreate the Huffman tree using this.
- For each binary code:
- A left edge is created for 0, and a right edge is created for 1.
- Finally, a leaf node is formed and the character is stored within it.
- This is repeated for all characters and their binary codes, recreating the Huffman tree.
- The remaining file is now read bit by bit, and the corresponding 0/1 bit in the tree is traversed. The corresponding character is written into the decompressed file as soon as a leaf node is encountered in the tree.
- The bit-reading step is repeated until the compressed file has been read completely.
In this manner, we recover all of the characters from our input file into a newly decompressed file with no data or quality loss.
Following the steps above, we can compress a text file and then overcome the bigger task of decompressing the file to its original content without any data loss.
Time Complexity: O(N * logN), where N is the number of unique characters. Building the tree performs O(N) insertions and extractions on the priority queue, each of which takes O(logN) time with an efficient heap, and a full binary tree with N leaves contains (2*N – 1) nodes in total.
Implementation using C Language:
Opening Input/Output Files:
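The original snippet is not reproduced here, so the following is a minimal sketch of this step. The function name `open_files`, the exit-on-error policy, and the caller-supplied file names are illustrative assumptions, not the article's exact code.

```c
#include <stdio.h>
#include <stdlib.h>

/* Open the input text file for reading and the compressed output file
 * for writing. Exits with an error message if either open fails. */
FILE *open_files(const char *in_path, const char *out_path, FILE **out)
{
    FILE *in = fopen(in_path, "rb");
    if (in == NULL) {
        perror(in_path);
        exit(EXIT_FAILURE);
    }
    *out = fopen(out_path, "wb");
    if (*out == NULL) {
        perror(out_path);
        fclose(in);
        exit(EXIT_FAILURE);
    }
    return in;
}
```

Both files are opened in binary mode so that the packed code bytes are written and read back without any newline translation.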
Function to Initialize and Create Min Heap:
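Below is a sketch of a min-heap of tree nodes keyed by frequency. The names `Node`, `MinHeap`, `heap_create`, `heap_push`, and `heap_pop` are assumptions for illustration; the original implementation may structure this differently.

```c
#include <stdlib.h>

typedef struct Node {
    unsigned char ch;            /* character (meaningful at leaves) */
    unsigned freq;               /* frequency used for ordering */
    struct Node *left, *right;
} Node;

typedef struct {
    Node **arr;
    int size, capacity;
} MinHeap;

MinHeap *heap_create(int capacity)
{
    MinHeap *h = malloc(sizeof *h);
    h->arr = malloc(capacity * sizeof *h->arr);
    h->size = 0;
    h->capacity = capacity;
    return h;
}

/* Sift the new node up until its parent's frequency is no larger. */
void heap_push(MinHeap *h, Node *n)
{
    int i = h->size++;
    while (i > 0 && h->arr[(i - 1) / 2]->freq > n->freq) {
        h->arr[i] = h->arr[(i - 1) / 2];
        i = (i - 1) / 2;
    }
    h->arr[i] = n;
}

/* Remove and return the minimum-frequency node, sifting the last
 * element down to restore the heap property. */
Node *heap_pop(MinHeap *h)
{
    Node *min = h->arr[0];
    Node *last = h->arr[--h->size];
    int i = 0, c;
    while ((c = 2 * i + 1) < h->size) {
        if (c + 1 < h->size && h->arr[c + 1]->freq < h->arr[c]->freq)
            c++;
        if (h->arr[c]->freq >= last->freq)
            break;
        h->arr[i] = h->arr[c];
        i = c;
    }
    h->arr[i] = last;
    return min;
}
```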
Function to Build and Create a Huffman Tree:
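A sketch of the tree-building loop described earlier: pop the two lowest-frequency nodes and merge them under a new dummy root until one node remains. The compact static-array heap and the function names are illustrative assumptions made so the snippet compiles on its own.

```c
#include <stdlib.h>

typedef struct Node {
    unsigned char ch;
    unsigned freq;
    struct Node *left, *right;
} Node;

/* Minimal static min-heap keyed on freq; 512 > 2*256 - 1 nodes. */
static Node *heap[512];
static int heap_size;

static void heap_push(Node *n)
{
    int i = heap_size++;
    while (i > 0 && heap[(i - 1) / 2]->freq > n->freq) {
        heap[i] = heap[(i - 1) / 2];
        i = (i - 1) / 2;
    }
    heap[i] = n;
}

static Node *heap_pop(void)
{
    Node *min = heap[0], *last = heap[--heap_size];
    int i = 0, c;
    while ((c = 2 * i + 1) < heap_size) {
        if (c + 1 < heap_size && heap[c + 1]->freq < heap[c]->freq)
            c++;
        if (heap[c]->freq >= last->freq)
            break;
        heap[i] = heap[c];
        i = c;
    }
    heap[i] = last;
    return min;
}

static Node *new_node(unsigned char ch, unsigned freq, Node *l, Node *r)
{
    Node *n = malloc(sizeof *n);
    n->ch = ch; n->freq = freq; n->left = l; n->right = r;
    return n;
}

/* freq[c] > 0 for every character present in the file. */
Node *build_huffman_tree(const unsigned freq[256])
{
    heap_size = 0;
    for (int c = 0; c < 256; c++)
        if (freq[c])
            heap_push(new_node((unsigned char)c, freq[c], NULL, NULL));
    while (heap_size > 1) {
        Node *a = heap_pop(), *b = heap_pop();           /* two minima */
        heap_push(new_node(0, a->freq + b->freq, a, b)); /* dummy root */
    }
    return heap_pop();
}
```

For the “Stressed-desserts” example above, the returned root's frequency is 17, the total character count.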
Recursive Function to Print Binary Codes into Compressed File:
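The heading refers to printing codes into the compressed file; the sketch below records them into an in-memory table instead, which keeps the example self-contained. The traversal appends '0' for a left edge and '1' for a right edge, and the accumulated string at each leaf is that character's code. The names `assign_codes` and `codes` are assumptions.

```c
#include <string.h>

typedef struct Node {
    unsigned char ch;
    struct Node *left, *right;
} Node;

static char codes[256][64];      /* code string for each character */

/* buf holds the path taken so far; depth is its current length. */
void assign_codes(const Node *n, char *buf, int depth)
{
    if (n->left == NULL && n->right == NULL) {   /* leaf: record code */
        buf[depth] = '\0';
        strcpy(codes[n->ch], buf);
        return;
    }
    buf[depth] = '0';                            /* left edge  -> 0 */
    assign_codes(n->left, buf, depth + 1);
    buf[depth] = '1';                            /* right edge -> 1 */
    assign_codes(n->right, buf, depth + 1);
}
```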
Function to Compress the File by Substituting Characters with their Huffman Codes:
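A sketch of the substitution step: each input character is replaced by its code, packed eight bits to a byte before being written out. The `codes` table (string form of each character's code) and the zero-padding of the final byte are assumptions about the surrounding implementation.

```c
#include <stdio.h>

/* Replace each character of `in` with its code string from `codes`,
 * packing the bits eight at a time into bytes written to `out`. */
void compress(FILE *in, FILE *out, char codes[256][64])
{
    unsigned char byte = 0;
    int nbits = 0, ch;
    while ((ch = fgetc(in)) != EOF) {
        for (const char *p = codes[ch]; *p; p++) {
            byte = (unsigned char)((byte << 1) | (*p - '0'));
            if (++nbits == 8) {
                fputc(byte, out);
                byte = 0;
                nbits = 0;
            }
        }
    }
    if (nbits > 0)                   /* pad the final partial byte */
        fputc(byte << (8 - nbits), out);
}
```

Because the last byte may be padded, the decoder relies on the stored total character count to know where the real data ends.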
Function to Build Huffman Tree from Data Extracted from Compressed File:
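A sketch of recreating the tree from the (character, code) pairs stored in the header: a 0 walks or creates a left child, a 1 a right child, and the character is stored at the final node. The names `new_node` and `insert_code` are illustrative.

```c
#include <stdlib.h>

typedef struct Node {
    unsigned char ch;
    struct Node *left, *right;
} Node;

Node *new_node(void)
{
    return calloc(1, sizeof(Node));   /* zeroed: no children yet */
}

/* Walk the code string from the root, creating missing nodes along
 * the way; store the character at the node the code leads to. */
void insert_code(Node *root, unsigned char ch, const char *code)
{
    Node *cur = root;
    for (; *code; code++) {
        Node **next = (*code == '0') ? &cur->left : &cur->right;
        if (*next == NULL)
            *next = new_node();
        cur = *next;
    }
    cur->ch = ch;                     /* leaf reached */
}
```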
Function to Decompress the Compressed File:
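A sketch of the decoding loop: read the payload bit by bit, walk left on 0 and right on 1, and emit a character at each leaf. The total character count from the header (assumed here as the parameter `total_chars`) tells the loop when to stop, so padding bits in the last byte are ignored.

```c
#include <stdio.h>

typedef struct Node {
    unsigned char ch;
    struct Node *left, *right;
} Node;

/* Decode `total_chars` characters from `in` using the rebuilt tree,
 * writing them to `out`. */
void decompress(FILE *in, FILE *out, const Node *root, long total_chars)
{
    const Node *cur = root;
    long written = 0;
    int byte;
    while (written < total_chars && (byte = fgetc(in)) != EOF) {
        for (int bit = 7; bit >= 0 && written < total_chars; bit--) {
            cur = ((byte >> bit) & 1) ? cur->right : cur->left;
            if (cur->left == NULL && cur->right == NULL) {  /* leaf */
                fputc(cur->ch, out);
                written++;
                cur = root;          /* restart from the root */
            }
        }
    }
}
```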
When the snippets of code above are combined into a full implementation of the algorithm and a large corpus of data is passed to it, the results show that a text file can typically be compressed to around 40-45% of its original size (a saving of more than half) and then decompressed without losing a single byte of data.