
Commits on May 10, 2017

  1. Compute compressed endpoints size without pack simulation

    This change improves compression speed.
    
    Explanation:
    While trying different remappings for the endpoint indices, there is no need to perform a full pack simulation when using Huffman coding. Once the delta index histogram has been generated, it is sufficient to multiply each code size by the corresponding frequency to get the total size of the compressed endpoint index stream. There is also no need to compute the rest of the compressed stream: its size does not depend on the endpoint remapping and is therefore constant, so it does not affect the size comparison during endpoint optimization.
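
    As an illustration, a minimal sketch of the size estimate under this scheme (hypothetical names; the actual Crunch code differs):

      #include <cstdint>
      #include <vector>

      // Once the Huffman code length (in bits) of every delta-index symbol is
      // known, the compressed size of the endpoint index stream for a candidate
      // remapping is just the dot product of code lengths and the delta index
      // histogram; no pack simulation is required.
      uint64_t estimated_stream_bits(const std::vector<uint8_t>& code_lengths,
                                     const std::vector<uint32_t>& delta_histogram) {
        uint64_t bits = 0;
        for (size_t s = 0; s < delta_histogram.size(); ++s)
          bits += uint64_t(code_lengths[s]) * delta_histogram[s];
        return bits;  // the rest of the stream is constant across remappings
      }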
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.864 sec
    Modified: 1494501 bytes / 25.317 sec
    Improvement: 5.54% (compression ratio) / 12.29% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 36.927 sec
    Modified: 1945365 bytes / 33.151 sec
    Improvement: 5.80% (compression ratio) / 10.23% (compression time)
    Alexander Suvorov committed May 10, 2017

Commits on May 9, 2017

  1. Switch from chunk encoding to block encoding after quantization

    This change simplifies further modification of the code.
    
    Explanation:
    Considering that chunks are no longer used in the output format, it makes sense to also remove the chunk-related code from the intermediate processing. This modification also makes it possible to use an endpoint reference from the leftmost block of a scanline to the rightmost block of the previous scanline (a wrapped reference to the left), as illustrated in the sketch below.
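
    A minimal sketch of the wrapped reference, assuming row-major block storage (hypothetical helper, not the actual Crunch code):

      #include <cstdint>

      // Without chunk grouping, the "left" neighbour of block (0, y) naturally
      // wraps to the rightmost block (width - 1, y - 1) of the previous
      // scanline, so the reference is uniformly "block - 1" for any block > 0.
      uint32_t left_reference(uint32_t x, uint32_t y, uint32_t width) {
        uint32_t block = y * width + x;  // caller guarantees block > 0
        return block - 1;
      }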
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.846 sec
    Modified: 1494501 bytes / 25.628 sec
    Improvement: 5.54% (compression ratio) / 11.16% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 36.869 sec
    Modified: 1945365 bytes / 33.497 sec
    Improvement: 5.80% (compression ratio) / 9.15% (compression time)
    Alexander Suvorov committed May 9, 2017

Commits on May 5, 2017

  1. Remove duplicate endpoints and selectors from the codebooks

    This change significantly improves the compression ratio.
    
    Explanation:
    By default, the size of the endpoint and selector codebooks is calculated from the number of blocks in the image and the quality parameter; the actual complexity of the image does not affect the initial codebook size. The target codebook size is therefore selected so that even complex images can be approximated well enough. At the same time, the lower the complexity of the image, the higher the density of the quantized vectors. Considering that vector quantization is performed using floating-point computations, while the quantized endpoints have integer components, a high density of quantized vectors results in a large number of duplicate endpoints. As a result, some identical endpoints are represented by multiple different indices, which significantly affects the compression ratio. Note that this is not the case for selectors, as their corresponding vector components are rounded after quantization; instead, it leads to some duplicate selectors in the codebook remaining unused. In the modified version of the algorithm, all duplicate codebook entries are merged together, unused entries are removed from the codebooks, and the endpoint and selector indices are updated accordingly (see the sketch below).
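
    A minimal sketch of the merge step (Entry is a hypothetical stand-in for one codebook entry and must be ordered so it can serve as a map key; the real code operates on Crunch's codebook structures):

      #include <cstdint>
      #include <map>
      #include <vector>

      template <typename Entry>
      void dedupe_codebook(std::vector<Entry>& codebook,
                           std::vector<uint32_t>& block_indices) {
        std::map<Entry, uint32_t> first_use;  // entry -> surviving index
        std::vector<uint32_t> remap(codebook.size());
        std::vector<Entry> merged;
        for (size_t i = 0; i < codebook.size(); ++i) {
          auto it = first_use.find(codebook[i]);
          if (it == first_use.end()) {  // first occurrence survives
            it = first_use.emplace(codebook[i], uint32_t(merged.size())).first;
            merged.push_back(codebook[i]);
          }
          remap[i] = it->second;  // duplicates collapse onto the survivor
        }
        for (uint32_t& idx : block_indices) idx = remap[idx];  // fix references
        codebook.swap(merged);  // a further usage pass can drop unused entries
      }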
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.835 sec
    Modified: 1494630 bytes / 25.637 sec
    Improvement: 5.54% (compression ratio) / 11.09% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 36.875 sec
    Modified: 1946533 bytes / 33.546 sec
    Improvement: 5.75% (compression ratio) / 9.03% (compression time)
    Alexander Suvorov committed May 5, 2017
  2. Encode raw selector indices instead of selector indices deltas

    This change significantly improves the compression ratio and compression speed.
    
    Explanation:
    The original version of Crunch encodes the differences between neighbouring indices in order to take advantage of their similarity. The efficiency of such an approach depends highly on the continuity of the encoded data. While neighbouring color and alpha endpoints are usually similar, this is usually not the case for selectors. Of course, in some situations encoding deltas for selector indices makes sense, for example when the image contains a lot of regular patterns (except for the special case of completely flat areas, where using selector deltas does not bring much advantage). Such situations are relatively rare, however, so it usually turns out to be more efficient to encode raw selector indices. Note that when deltas are not used for selector indices, remapping the selector indices no longer affects the size of the encoded selector index stream (at least when using Huffman coding). This makes the Zeng optimization step unnecessary, and it is sufficient to simply optimize the size of the packed selector codebook.
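
    A minimal sketch contrasting the two schemes (emit_symbol is a hypothetical stand-in for the entropy coder):

      #include <cstdint>
      #include <vector>

      // Delta coding predicts each selector index from the previous one; raw
      // coding emits the index itself. With Huffman coding, the size of the raw
      // stream depends only on the index histogram, so remapping the indices
      // (and hence the Zeng step) no longer affects the selector stream size.
      void encode_selector_indices(const std::vector<uint32_t>& indices,
                                   uint32_t codebook_size, bool use_deltas,
                                   void (*emit_symbol)(uint32_t)) {
        uint32_t prev = 0;
        for (uint32_t idx : indices) {
          emit_symbol(use_deltas ? (idx + codebook_size - prev) % codebook_size
                                 : idx);  // modified scheme: raw index
          prev = idx;
        }
      }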
    
    Note:
    This modification alters the output file format and makes it incompatible with the previous revisions.
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.845 sec
    Modified: 1521167 bytes / 26.048 sec
    Improvement: 3.86% (compression ratio) / 9.70% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 36.949 sec
    Modified: 1977373 bytes / 33.889 sec
    Improvement: 4.25% (compression ratio) / 8.28% (compression time)
    Alexander Suvorov committed May 5, 2017

Commits on May 4, 2017

  1. Switch from the chunk encoding concept to the reference encoding concept

    This change improves the compression ratio.
    
    Explanation:
    In the original version of Crunch, all blocks are grouped into chunks of 2x2 blocks. Each chunk can have one of 8 different types. The type of the chunk determines which blocks inside the chunk share the same endpoints (for example, all blocks in the chunk share the same endpoints, or the blocks in the right column share the same endpoints, or all blocks have different endpoints, etc.). Encoding endpoint equality is usually cheaper than encoding duplicate endpoint indices. The 8 chunk types used do not cover all the possibilities, but they can be encoded efficiently at 0.75 bits per block, uncompressed (3 bits per 2x2 chunk of 4 blocks).
    
    The modified algorithm no longer uses the concept of chunks in the output file format and is based on an alternative approach. The endpoints for each block can be either copied from the nearest block to the left (reference to the left), copied from the nearest block above (reference to the top), or decoded from the stream (reference to itself). Note that this is a superset of the original encoding, so all images previously encoded with the original algorithm can be losslessly transcoded into the new format, but not vice versa. Even though the new endpoint equality encoding is more expensive (log2(3) ≈ 1.58 bits per block, uncompressed), it provides more flexibility for endpoint matching inside the former "chunks", and, more importantly, it allows endpoints to be inherited from outside the former "chunks" (which is not possible with the original chunk encoding). The blocks are no longer grouped together and are encoded in the same order as they appear in the image.
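
    A minimal decoder-side sketch of the reference scheme (read_reference and read_index are hypothetical stand-ins for the actual bitstream calls):

      #include <cstdint>
      #include <vector>

      enum Reference { kSelf, kLeft, kTop };

      // Each block either reuses the endpoints of its left or top neighbour or
      // reads a fresh endpoint index; identifying one of the three cases costs
      // about log2(3) ≈ 1.58 bits per block uncompressed. The encoder never
      // emits kLeft in the first column or kTop in the first row.
      void decode_endpoint_indices(uint32_t width, uint32_t height,
                                   Reference (*read_reference)(),
                                   uint32_t (*read_index)(),
                                   std::vector<uint32_t>& endpoint) {
        endpoint.resize(size_t(width) * height);
        for (uint32_t y = 0; y < height; ++y)
          for (uint32_t x = 0; x < width; ++x) {
            size_t b = size_t(y) * width + x;
            switch (read_reference()) {
              case kLeft: endpoint[b] = endpoint[b - 1]; break;
              case kTop: endpoint[b] = endpoint[b - width]; break;
              case kSelf: endpoint[b] = read_index(); break;
            }
          }
      }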
    
    Note:
    This modification alters the output file format and makes it incompatible with the previous revisions.
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.903 sec
    Modified: 1548791 bytes / 28.818 sec
    Improvement: 2.11% (compression ratio) / 0.29% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 36.978 sec
    Modified: 2017245 bytes / 36.846 sec
    Improvement: 2.32% (compression ratio) / 0.36% (compression time)
    Alexander Suvorov committed May 4, 2017

Commits on May 2, 2017

  1. Remove linear lists of endpoint and selector indices

    Explanation:
    After switching to ordering histograms, the linear lists of endpoint and selector indices are no longer used in the Zeng function and can therefore be removed.
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.872 sec
    Modified: 1561622 bytes / 28.434 sec
    Improvement: 1.30% (compression ratio) / 1.52% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 36.910 sec
    Modified: 2033151 bytes / 36.369 sec
    Improvement: 1.55% (compression ratio) / 1.47% (compression time)
    Alexander Suvorov committed May 2, 2017

Commits on Apr 28, 2017

  1. Use left nearest block for selector index prediction

    This change improves the compression ratio.
    
    Explanation:
    In the original algorithm, the relative position of the block used to predict the selector index of the currently decoded block depends on the position of the current block within the chunk; it can be a horizontal or a diagonal neighbour. Using the left nearest neighbour for selector index prediction for every block (except the blocks at the image borders) minimizes the average distance to the prediction block and therefore usually improves the selector index prediction. As with the endpoint index processing, the selector ordering histogram is now generated based on the selector index prediction order.
    
    Note:
    This modification alters the output file format and makes it incompatible with the previous revisions.
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.869 sec
    Modified: 1561622 bytes / 28.522 sec
    Improvement: 1.30% (compression ratio) / 1.20% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 37.038 sec
    Modified: 2033151 bytes / 36.407 sec
    Improvement: 1.55% (compression ratio) / 1.70% (compression time)
    Alexander Suvorov committed Apr 28, 2017
  2. Generate ordering histogram for endpoint indexes based on the prediction order
    
    This change improves the compression ratio.
    
    Explanation:
    The original histogram was generated based on the linear order of the encoded endpoint indexes. In the modified version of the algorithm, endpoint indexes are predicted using the nearest left block in the image, which is not necessarily the preceding block in the encoded sequence. Using the same block ordering both for prediction and for Zeng optimization normally improves the compression ratio.
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.905 sec
    Modified: 1566133 bytes / 28.457 sec
    Improvement: 1.02% (compression ratio) / 1.55% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 37.021 sec
    Modified: 2040086 bytes / 36.300 sec
    Improvement: 1.22% (compression ratio) / 1.95% (compression time)
    Alexander Suvorov committed Apr 28, 2017
  3. Prepare for encoding of endpoint and selector indexes in non-linear order
    
    This change makes the compression scheme more flexible.
    
    Explanation:
    In the original scheme, indexes are encoded in linear order, which means that each index uses the previously encoded index for prediction. However, more sophisticated schemes might require arbitrary references into the stream of already encoded indexes. For this reason, the Zeng function has been modified to accept an ordering histogram as input instead of the linear array of indexes. Note that the Zeng function itself does not rely on the indexes being encoded in linear order.
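
    A minimal sketch of the new interface (hypothetical names; the actual Zeng implementation in Crunch differs):

      #include <cstdint>
      #include <map>
      #include <utility>
      #include <vector>

      // The caller counts how often index `cur` is predicted from index `prev`,
      // in whatever prediction order the scheme uses; the Zeng optimization
      // then works from this histogram alone, with no linear index list.
      using OrderingHistogram = std::map<std::pair<uint32_t, uint32_t>, uint32_t>;

      OrderingHistogram build_ordering_histogram(
          const std::vector<std::pair<uint32_t, uint32_t>>& prediction_pairs) {
        OrderingHistogram hist;
        for (const auto& pair : prediction_pairs)
          ++hist[pair];  // (cur, prev) co-occurrence frequency
        return hist;
      }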
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.867 sec
    Modified: 1570534 bytes / 28.524 sec
    Improvement: 0.74% (compression ratio) / 1.19% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 37.001 sec
    Modified: 2051509 bytes / 36.388 sec
    Improvement: 0.67% (compression ratio) / 1.66% (compression time)
    Alexander Suvorov committed Apr 28, 2017

Commits on Apr 27, 2017

  1. Use left nearest block for endpoint index prediction

    This change improves the compression ratio.
    
    Explanation:
    In the original algorithm, the relative position of the block used to predict the endpoint index of the currently decoded block depends on the chunk encoding type; it can be a horizontal neighbour, a vertical neighbour, a diagonal neighbour, or in some rare cases even a block at relative position (-2, 0) or (-3, 0). Using the left nearest neighbour for endpoint index prediction for every block (except the blocks at the image borders) minimizes the average distance to the prediction block and therefore usually improves the endpoint index prediction.
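
    A minimal sketch of the prediction rule (hypothetical names, not the actual Crunch code):

      #include <cstdint>
      #include <vector>

      // Every block outside the first column predicts its endpoint index from
      // the block immediately to its left, independent of any chunk structure.
      uint32_t predict_endpoint_index(uint32_t x, uint32_t y, uint32_t width,
                                      const std::vector<uint32_t>& endpoint) {
        if (x == 0)
          return 0;  // image border: no left neighbour, fall back to a default
        return endpoint[size_t(y) * width + (x - 1)];
      }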
    
    Note:
    This modification alters the output file format and makes it incompatible with the previous revisions.
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.838 sec
    Modified: 1570534 bytes / 28.629 sec
    Improvement: 0.74% (compression ratio) / 0.72% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 36.977 sec
    Modified: 2051509 bytes / 36.568 sec
    Improvement: 0.67% (compression ratio) / 1.11% (compression time)
    Alexander Suvorov committed Apr 27, 2017
  2. Reorder chunks in each scanline in the left-to-right manner

    This change slightly improves the compression ratio and compression time.
    
    Explanation:
    The efficiency of the Crunch encoding scheme depends on the similarity between neighbouring chunks. For this reason, the original version of Crunch reverses the order of chunks after each scanline, so that there is no jump from one side of the image to the other at the image borders. The problem is that inside each chunk, the blocks are normally ordered top-to-bottom, left-to-right, regardless of the chunk scanning order. While on the forward scan we normally perform diagonal (+1, +1) jumps to get to the next chunk, on the reverse scan we normally have to perform much larger (-3, +1) jumps, which usually defeats the advantage of not having a discontinuity at the image borders.
    
    Note:
    This modification alters the output format and makes it incompatible with the previous revisions.
    
    Testing:
    The modified algorithm has been tested on the Kodak test set using a 64-bit build with default settings (running on Windows 10, i7-4790, 3.6 GHz). All decompressed test images are identical to the images compressed and decompressed using the original version of Crunch.
    
    [Compressing Kodak set without mipmaps]
    Original: 1582222 bytes / 28.882 sec
    Modified: 1579618 bytes / 28.743 sec
    Improvement: 0.16% (compression ratio) / 0.48% (compression time)
    
    [Compressing Kodak set with mipmaps]
    Original: 2065243 bytes / 36.920 sec
    Modified: 2061499 bytes / 36.833 sec
    Improvement: 0.18% (compression ratio) / 0.24% (compression time)
    Alexander Suvorov committed Apr 27, 2017

Commits on Apr 26, 2017

  1. Update .gitignore

    Alexander Suvorov committed Apr 26, 2017
  2. Remove big endian support, write barriers, byte streams and dxt1 decoding optimization from the decompression code
    
    This change makes the code simpler to modify. The removed functionality might be reintroduced in the future if necessary.
    Alexander Suvorov committed Apr 26, 2017
  3. Split the header block from crn_decomp.h into a separate crn_defs.h file

    This change makes the previously used CRND_HEADER_FILE_ONLY macro unnecessary.
    Alexander Suvorov committed Apr 26, 2017
  4. Reformat the source files

    The source files have been reformatted using:
    clang-format.exe -style="{BasedOnStyle: Google, AllowAllParametersOfDeclarationOnNextLine: false, AllowShortFunctionsOnASingleLine: Inline, AllowShortIfStatementsOnASingleLine: false, AllowShortLoopsOnASingleLine: false, ColumnLimit: 0, DerivePointerAlignment: false, SortIncludes: false}"
    Alexander Suvorov committed Apr 26, 2017
  5. Update solution to use Visual C++ 2010 compiler and libraries

    When compiled with Visual Studio 2010, the code will produce the same results as the originally distributed Crunch binaries.
    Alexander Suvorov committed Apr 26, 2017

Commits on Nov 15, 2016

  1. readme: Update link to Compressonator

    Signed-off-by: Adam Jackson <ajax@redhat.com>
    nwnk committed Nov 15, 2016