Improve and optimize the endpoint reordering algorithm
This change significantly improves the compression ratio and compression speed.

Explanation:
After the endpoint codebook has been determined, the endpoints can be reordered to improve the compression ratio. On the one hand, the endpoint indices of neighboring blocks should be similar, as the encoder compresses the deltas between those neighbor indices. On the other hand, neighboring endpoints in the codebook should also be similar, as the encoder compresses the deltas between the color components of those neighbor endpoints. The optimization is based on Zeng's technique, using a weighted function which takes into account both the similarity of the endpoint indices for neighboring blocks and the similarity of neighboring endpoints in the codebook.

The similarity of the endpoint indices is optimized using the combined neighborhood frequency of the candidate endpoint and all the currently selected endpoints in the list. The similarity of the neighboring endpoints in the codebook is optimized using the Euclidean distance from the candidate endpoint to the extremity of the selected endpoint list. The original optimization function for the endpoint candidate (i) can be represented as:

F(i) = (total_neighborhood_frequency(i) + 1) * (endpoint_similarity(i) + 1)
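For illustration, a minimal C++ sketch of one greedy selection step under this function follows. The endpoint struct, the flat freq matrix layout, and the inverse-distance similarity proxy are assumptions made for the sketch, not Crunch's actual data structures or metric.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct endpoint { int r, g, b; };   // assumed color endpoint representation

// Assumed metric: squared Euclidean distance between two endpoints.
static inline uint64_t dist_sq(const endpoint& a, const endpoint& b) {
    int64_t dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return uint64_t(dr * dr + dg * dg + db * db);
}

// One greedy step: among unselected endpoints, pick the one maximizing
// F(i) = (total_neighborhood_frequency(i) + 1) * (endpoint_similarity(i) + 1).
// freq[i * n + j] counts how often endpoints i and j occur in neighboring
// blocks; order (non-empty) holds the endpoints already selected.
size_t pick_next(const std::vector<endpoint>& codebook,
                 const std::vector<uint32_t>& freq,
                 const std::vector<bool>& selected,
                 const std::vector<size_t>& order) {
    const size_t n = codebook.size();
    const endpoint& tail = codebook[order.back()];  // extremity of the list
    size_t best = 0;
    uint64_t best_f = 0;
    for (size_t i = 0; i < n; ++i) {
        if (selected[i]) continue;
        // Combined neighborhood frequency against all selected endpoints.
        uint64_t total_freq = 0;
        for (size_t j : order) total_freq += freq[i * n + j];
        // Hypothetical similarity proxy: larger when closer to the tail.
        uint64_t similarity = (1u << 16) / (1 + dist_sq(codebook[i], tail));
        uint64_t f = (total_freq + 1) * (similarity + 1);
        if (f > best_f) { best_f = f; best = i; }
    }
    return best;
}
```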

The problem with this approach is the following. While endpoint_similarity(i) has a limited range of values, total_neighborhood_frequency(i) grows rapidly with the increasing size of the selected endpoint list. With each iteration this introduces additional imbalance into the weighted function. In order to minimize this effect, it is proposed to normalize total_neighborhood_frequency(i) on each iteration. For computational simplicity, the normalizer is computed as the optimal total_neighborhood_frequency value from the previous iteration, multiplied by a constant. The modified optimization function can be represented as:

F(i) = (total_neighborhood_frequency(i) + total_neighborhood_frequency_normalizer) * (endpoint_similarity(i) + 1)
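A sketch of the full selection loop with this normalizer, reusing the assumed layout, endpoint type, and similarity proxy from the sketch above; the starting endpoint, the initial normalizer value, and the scaling constant are all hypothetical.

```cpp
// Reorders the codebook greedily, updating the normalizer each iteration.
std::vector<size_t> reorder(const std::vector<endpoint>& codebook,
                            const std::vector<uint32_t>& freq) {
    const size_t n = codebook.size();
    std::vector<bool> selected(n, false);
    std::vector<size_t> order{0};       // assumed arbitrary starting endpoint
    selected[0] = true;
    uint64_t normalizer = 1;            // assumed initial value
    const unsigned kScaleShift = 4;     // hypothetical constant (~1/16)
    while (order.size() < n) {
        const endpoint& tail = codebook[order.back()];
        size_t best = 0;
        uint64_t best_f = 0, best_total_freq = 0;
        for (size_t i = 0; i < n; ++i) {
            if (selected[i]) continue;
            uint64_t total_freq = 0;
            for (size_t j : order) total_freq += freq[i * n + j];
            uint64_t similarity = (1u << 16) / (1 + dist_sq(codebook[i], tail));
            // Modified weighted function with the per-iteration normalizer.
            uint64_t f = (total_freq + normalizer) * (similarity + 1);
            if (f > best_f) { best_f = f; best = i; best_total_freq = total_freq; }
        }
        selected[best] = true;
        order.push_back(best);
        // Normalizer for the next iteration: this iteration's optimal
        // total_neighborhood_frequency, scaled by a constant.
        normalizer = 1 + (best_total_freq >> kScaleShift);
    }
    return order;
}
```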

The main ideas used for endpoint reordering optimization:
- all computations that are common to the endpoint reordering threads have been moved outside of the threads
- the ordering histogram offsets, which point to the neighborhood frequency values for a specific endpoint, are now cached, reducing the number of multiplications when accessing the histogram (see the sketch after this list)
- floating-point operations have been replaced with integer operations
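A minimal sketch of the cached-offset idea, assuming the neighborhood histogram is stored as a flat n*n array; all names are illustrative.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// hist[i * n + j] counts how often endpoints i and j occur in neighboring
// blocks. Caching the candidate's row base pointer once removes the
// i * n + j multiplication from every access in the inner loop.
uint64_t total_neighborhood_frequency(const std::vector<uint32_t>& hist,
                                      size_t n, size_t candidate,
                                      const std::vector<size_t>& order) {
    const uint32_t* row = hist.data() + candidate * n;  // cached offset
    uint64_t total = 0;
    for (size_t j : order)
        total += row[j];   // pure integer adds, no per-access multiply
    return total;
}
```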

Testing:
The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All the decompressed test images are identical to the images produced by compressing and decompressing with the original version of Crunch.

[Compressing Kodak set without mipmaps]
Original: 1582222 bytes / 28.873 sec
Modified: 1482726 bytes / 15.791 sec
Improvement: 6.29% (compression ratio) / 45.31% (compression time)

[Compressing Kodak set with mipmaps]
Original: 2065243 bytes / 36.925 sec
Modified: 1931475 bytes / 20.970 sec
Improvement: 6.48% (compression ratio) / 43.21% (compression time)
alexander-suvorov committed Jun 9, 2017
1 parent 5822475 commit f1d6a5a
Showing 7 changed files with 254 additions and 488 deletions.