
Resulting in larger files #30

Closed
GoogleCodeExporter opened this issue Mar 9, 2015 · 3 comments
I am making use of a project called AdvanceCOMP; one of its compression options (-4, or --shrink-insane) uses zopfli. While going through the PNGs for Torchlight, there are dozens of PNGs that end up larger. I've attached the gloves so you can review and test them yourself as appropriate.

advpng.exe -z -4 */*.png

43653       43653 100% wardrobe/dragon_gloves.png (Bigger 45713)
89125       89125 100% wardrobe/heavyleather_boots.png (Bigger 96720)
11412       10817  94% wardrobe/heavyleather_boots_alt01.png
90564       90564 100% wardrobe/heavyleather_chest.png (Bigger 98420)
89328       89328 100% wardrobe/heavyleather_gloves.png (Bigger 97619)

At worst, the compression should cap out at the same size as the input. This suggests the root of the problem is that some sections of the data compress better while others end up being left uncompressed, and the overhead of carrying those uncompressed sections pushes the total past the original size. I noted this in some of my own compression experiments years ago.
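
For what it's worth, DEFLATE itself lets an encoder fall back to stored (uncompressed) blocks whenever entropy coding would expand the data, so a careful encoder should never grow the input by more than a few bytes of block framing. A minimal demonstration with Python's zlib (used here only as a convenient reference encoder; zopfli is a separate implementation):

import os
import zlib

raw = os.urandom(256 * 1024)        # incompressible input
packed = zlib.compress(raw, 9)

# zlib falls back to stored blocks when Huffman coding would expand the
# data, so even random input grows only by the 2-byte zlib header, the
# 4-byte Adler-32 checksum, and ~5 bytes of framing per stored block.
print(len(raw), len(packed), len(packed) - len(raw))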


Viable solution:
 For sections that have no compression applied to them (causing expansion), the encoder should instead search for an identical match of the same length; such a run is most likely already part of another match. If it falls at the beginning or end of another compressed section, that section should be truncated to allow the uncompressed section to compress at a 1:1 rate (so long as the other match remains long enough to retain compression).

I am not aware of the full details of zlib compression, so there needs to be an 
additional rule.

In the instance that a match is found in the middle of another match, the outer match should be split into two matches that deliberately avoid the middle, giving the inner match to the uncompressed section. This should only happen when those three matches take less space than one match plus one non-match; a rough sketch of that decision rule follows below.
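
Here is a sketch of that rule, assuming a toy token model and a flat bit-cost function (the Match/Raw types, MIN_MATCH cutoff, and costs below are illustrative assumptions, not zopfli's actual LZ77 store or Huffman costs):

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Match:
    length: int        # bytes reproduced by a back-reference

@dataclass
class Raw:
    length: int        # bytes left uncompressed (the expanding case)

Token = Union[Match, Raw]
MIN_MATCH = 3          # DEFLATE's minimum back-reference length

def cost_bits(tok: Token) -> int:
    # Assumed flat costs: ~24 bits per length/distance pair, ~9 bits
    # per raw byte. Real DEFLATE costs depend on the Huffman trees.
    return 24 if isinstance(tok, Match) else 9 * tok.length

def split_around(outer: Match, hole_at: int, raw: Raw) -> List[Token]:
    """Split `outer` into two matches around a hole at `hole_at`, so the
    bytes under the hole can source a new match that replaces the raw
    run. Keep the split only when the three matches cost less than the
    original one match plus one raw run."""
    left = Match(hole_at)
    inner = Match(raw.length)     # the raw run, now covered 1:1 by a match
    right = Match(outer.length - hole_at - raw.length)
    if left.length < MIN_MATCH or right.length < MIN_MATCH:
        return [outer, raw]       # near an edge: the truncation rule applies instead
    if (cost_bits(left) + cost_bits(right) + cost_bits(inner)
            < cost_bits(outer) + cost_bits(raw)):
        return [left, right, inner]
    return [outer, raw]

# Example: a 40-byte outer match, a 12-byte raw run whose duplicate
# sits 10 bytes into the outer match.
print(split_around(Match(40), 10, Raw(12)))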

I'm not sure how much more complexity this would add, nor how much extra time it would take.

Original issue reported on code.google.com by rtcv...@yahoo.com on 2 Nov 2013 at 9:00
