Look at Density or other new secondary compression options #13
+1 for this. I was going to suggest Squash (it's what we use for ofxSquashBuddies generally).
There have recently been a bunch of commits and releases on Density, and besides lots of other issues, the one about the memory leak has been closed.
The biggest issue is that this would break backwards compatibility, so it would be best placed in an entirely new HAP codec variant rather than as an incremental update to the existing codec variants.
Yeah, I see that this would be somewhat of an issue.
This might also be worth investigating: https://github.com/Blosc/c-blosc
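For context, c-blosc is itself a meta-compressor: it adds byte-shuffling on top of a pluggable codec chosen by name ("blosclz", "lz4", "zstd", ...). A rough sketch of compressing a DXT payload through it is below; the typesize used for shuffling is just a guess on my part, not something taken from Hap.

```c
/* Rough sketch: compressing a DXT payload with c-blosc.
 * The typesize of 8 (one DXT1 block) is an untested guess; the
 * destination buffer must hold dxt_size + BLOSC_MAX_OVERHEAD bytes. */
#include <blosc.h>
#include <stddef.h>

int compress_with_blosc(const void *dxt, size_t dxt_size,
                        void *dst, size_t dst_capacity)
{
    blosc_init();                     /* normally done once per process */
    blosc_set_compressor("lz4");      /* or "blosclz", "zstd", ... */

    int written = blosc_compress(5,              /* compression level */
                                 BLOSC_SHUFFLE,  /* byte shuffle      */
                                 8,              /* typesize: a guess */
                                 dxt_size, dxt,
                                 dst, dst_capacity);
    blosc_destroy();
    return written;  /* >0 = compressed bytes, 0 = incompressible, <0 = error */
}
```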
Changing the lossless compression algorithm is a good idea; we will need to see whether these algorithms are well suited to the DXT data fed to them. Side question: what about turning those algorithms into lossy compression? i.e. the algorithm could detect small changes in the input that would allow better compression. Just curious about the artifacts it could generate.
Also, is the CPU -> GPU memory transfer a bottleneck in general? If so, a decompression algorithm that could run on the GPU would increase playback performance.
Squash is a compression abstraction library. Implementing Squash means you can easily switch between underlying compression codecs (e.g. Density, Snappy, etc). Also, in terms of lossy encoding: HAP already uses DXT texture compression (lossy), which happens in addition to the Snappy compression (lossless). If you want a lossy compression algorithm which is directly GPU decoded, then you will probably end up with something similar to h264/h265, which already has widespread GPU support for decoding up to 8k resolutions.
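A hypothetical sketch of what decoding a second-stage payload through Squash could look like; the signatures follow Squash's simple string-based API as I remember it, so treat them as an assumption and verify against squash/squash.h.

```c
/* Hypothetical sketch: decoding a second-stage payload via Squash, so
 * the underlying codec is chosen by name at runtime. Signatures are
 * assumed from Squash's simple API; check <squash/squash.h>. */
#include <squash/squash.h>
#include <stddef.h>
#include <stdint.h>

int decode_via_squash(const char *codec_name,
                      const uint8_t *src, size_t src_size,
                      uint8_t *dst, size_t dst_capacity)
{
    size_t out_size = dst_capacity;
    /* Trailing NULL terminates the (empty) list of codec options. */
    SquashStatus status = squash_decompress(codec_name, &out_size, dst,
                                            src_size, src, NULL);
    return status == SQUASH_OK;
}
```

The appeal is that trying Density, LZ4 or anything else Squash wraps then becomes a string change ("snappy" -> "lz4") rather than a new hard dependency in the build.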
It's missing a dataset with DXT textures :p
In theory, no: HAP without Snappy fits your definition, and so do a lot of other codecs. We could even create our own codec that is decodable with a custom compute shader. But it is a good idea to look into H264 and compare it to HAP: how many simultaneous playbacks can you have in a real-time environment? I think this company is trying to do similar stuff to HAP: http://www.binomial.info/
Btw, I've just taken a look at the above benchmarks, and it looks like Density has lower decompression speed (which is most crucial for the Hap case, I guess) compared to Snappy (40 MB/s vs 110 MB/s), in some cases at least - I chose the X-ray medical picture dataset.
Also, LZ4 looks quite promising.
I made a really quick prototype of Hap with different second-stage compressors a couple of weeks ago, and I will just post it here so everyone can work on it further.
The code snippet above shows the list of new compressors I've tried.
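In the same spirit, here is a hypothetical sketch of selecting between second-stage decompressors at decode time. The tag values are made up for illustration and are not Hap's actual format constants; Lizard and Density would slot in as extra cases, and note that LZ4 and LZ4HC streams decode with the same LZ4 call.

```c
/* Hypothetical sketch of a second-stage decompressor switch.
 * Tag values are illustrative only, not Hap's real format constants. */
#include <lz4.h>
#include <snappy-c.h>
#include <stddef.h>
#include <string.h>

enum second_stage_compressor {
    COMPRESSOR_NONE = 0,      /* illustrative values only */
    COMPRESSOR_SNAPPY = 1,
    COMPRESSOR_LZ4 = 2        /* also covers LZ4HC-encoded streams */
};

int decode_second_stage(enum second_stage_compressor tag,
                        const char *src, size_t src_size,
                        char *dst, size_t dst_capacity)
{
    switch (tag) {
    case COMPRESSOR_NONE:
        if (src_size > dst_capacity) return 0;
        memcpy(dst, src, src_size);
        return 1;
    case COMPRESSOR_SNAPPY: {
        size_t out_size = dst_capacity;
        return snappy_uncompress(src, src_size, dst, &out_size) == SNAPPY_OK;
    }
    case COMPRESSOR_LZ4:
        /* Returns bytes written, or a negative value on error. */
        return LZ4_decompress_safe(src, dst, (int)src_size,
                                   (int)dst_capacity) >= 0;
    default:
        return 0;
    }
}
```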
I used different DXT1/DXT5 DDS files to estimate the possible speed improvement over Snappy. Initially I thought the performance gain could be 2-3x, though I only got roughly a 20-30% improvement when using LZ4/Lizard inside an 8k*8k Hap video (no chunks), and when many threaded chunks are used the gain is even smaller. The good news is that Lizard/LZ4HC also tend to give 20-30% smaller file sizes while having similar or higher decompression speed compared to Snappy. My current guess is that the new compressors work much faster with bigger inputs, though I couldn't fully confirm that with lzbench results yet (I didn't have much time for that). It looks like I'm missing something there.
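One way to check the "bigger input decompresses disproportionately faster" guess outside of lzbench is to time the same frame split into chunks of different sizes. A rough sketch of the timing loop (not the prototype above; the chunk buffers are assumed to be prepared elsewhere, and LZ4 stands in for whichever codec is under test):

```c
/* Rough sketch: time LZ4 decompression of one DXT frame split into
 * independently compressed chunks, to see how throughput scales with
 * chunk size. chunk_capacity is the decompressed size of the largest
 * chunk; dst is reused because only timing matters here. */
#include <lz4.h>
#include <stdio.h>
#include <time.h>

static double seconds_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void time_chunked_decompress(const char *const *chunks, const int *chunk_sizes,
                             int chunk_count, char *dst, int chunk_capacity)
{
    double start = seconds_now();
    for (int i = 0; i < chunk_count; i++) {
        /* Each chunk decompresses independently, as with threaded Hap chunks. */
        LZ4_decompress_safe(chunks[i], dst, chunk_sizes[i], chunk_capacity);
    }
    double elapsed = seconds_now() - start;
    printf("%d chunks: %.3f ms\n", chunk_count, elapsed * 1e3);
}
```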
Density promises improved performance over Snappy.