Question: does oxipng optimization slow loading? #487

Open
Pop000100 opened this issue Mar 2, 2023 · 14 comments

@Pop000100

Pop000100 commented Mar 2, 2023

Does strong compression on a PNG slow it down?
I didn't know where to ask.

@andrews05
Collaborator

Good question. I don't really know the answer but my gut feeling would be no, not tangibly.

According to pngcrush, the Average and Paeth filters are slower to decode, and it offers a filter type 6 which avoids using them. This does sound plausible in theory, but I imagine the difference would be insignificant with a modern, SIMD-optimised decoder (e.g. see the recent performance improvements to the Rust png crate).
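For context, a minimal Python sketch (not from the thread) of the Paeth predictor as defined in the PNG specification; unfiltering a Paeth-filtered scanline requires this branchy computation for every byte, while the Sub and Up filters need only a single addition per byte, which is where the "slower to decode" claim comes from:

```python
# Paeth predictor from the PNG specification (filter type 4).
# a = byte to the left, b = byte above, c = byte above-left.
def paeth_predictor(a: int, b: int, c: int) -> int:
    p = a + b - c                      # initial estimate
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:          # pick the neighbour closest to the estimate,
        return a                       # breaking ties in the order a, b, c
    if pb <= pc:
        return b
    return c
```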

@Oondanomala

It should also be noted that while decompression may be slower, loading will probably still be faster overall, as there's less data to be read from disk.

@Armavica

Armavica commented Jul 1, 2023

I was asking myself the same question, and decided to try it.

I reduced an image dataset (about 1000 images with mixed bit depths) from 379 MiB to 162 MiB with oxipng -omax, and I benchmarked reading the images with the Python OpenCV bindings. The optimization took under 3 minutes. The original dataset took 12.1 s to load, and after optimization only 5.1 s (with a warm file cache each time).

The comparison is not entirely fair, because a few of the images were originally stored as TIFF, and I converted them losslessly to PNG in order to be able to optimize them. But even so, my dataset loads in under half the time, so I am completely reassured about the decompression time.
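For anyone wanting to reproduce this kind of measurement, a rough sketch of the benchmark described above, assuming a directory of PNGs before and after oxipng -omax and the opencv-python package (directory names and the warm-up strategy are illustrative, not from the thread):

```python
# Time how long OpenCV takes to decode every PNG in a directory.
# Requires the opencv-python package; directory names are placeholders.
import time
from pathlib import Path

import cv2


def load_all(directory: str) -> float:
    paths = sorted(Path(directory).glob("*.png"))
    start = time.perf_counter()
    for path in paths:
        img = cv2.imread(str(path), cv2.IMREAD_UNCHANGED)  # IMREAD_UNCHANGED keeps 16-bit data
        assert img is not None, f"failed to decode {path}"
    return time.perf_counter() - start


# Run each directory once to warm the OS file cache, then measure.
for directory in ("dataset_original", "dataset_optimized"):
    load_all(directory)
    print(f"{directory}: {load_all(directory):.2f} s")
```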

@TPS

TPS commented Jul 1, 2023

@Armavica A bit off-topic, but have you tried the same with other, more modern image formats (e.g. WebP and/or JXL) in their lossless modes? I, for one, would be curious about such comparative results.

@Armavica

Armavica commented Jul 5, 2023

@TPS I can try, but do you know how to losslessly convert files to WebP (on Linux)? I tried convert img.png -quality 100 -define webp:lossless=true img.webp, but this doesn't preserve pixel values.

@TPS

TPS commented Jul 5, 2023

@Armavica

  • For .webp, cwebp -lossless -exact -alpha_filter best should do, though -exact is only needed if you desperately care about the color values of 100% transparent pixels
  • For .jxl, cjxl -d 0 -e 9 -m=1 should do, though it could be -d 1 instead if you don't care about invisible pixel values (as with cwebp), and it's worth trying runs without -m=1 in case the VarDCT algorithm does better losslessly than Modular in rare cases

@Armavica

Armavica commented Jul 6, 2023

@TPS
So, after some investigation, the problem with WebP is that it doesn't support 16-bit images, so it doesn't work for my application (scientific images). As for JXL, it seems to support 16-bit images, but I can't find a setting that effectively preserves pixel values. I tried both -d 0 ("mathematically lossless") and -d 1 ("visually lossless"): in the first case the size is 20% smaller than with oxipng, and in the second about 90% smaller, but in neither case are the images losslessly reconstructed when I reload and compare them with Python (it is possible that I am doing something wrong, though). I don't think that I have more time to spend on this, unfortunately…
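For reference, a small sketch of the kind of pixel-level comparison described above, assuming both files have first been decoded back to a format OpenCV can read (e.g. via dwebp or djxl); file names are placeholders:

```python
# Check whether a round-tripped image is bit-exact with the original.
# Uses OpenCV + NumPy; the file names are placeholders.
import cv2
import numpy as np

original = cv2.imread("img_original.png", cv2.IMREAD_UNCHANGED)
roundtrip = cv2.imread("img_roundtrip.png", cv2.IMREAD_UNCHANGED)

if original.shape != roundtrip.shape or original.dtype != roundtrip.dtype:
    print("shape/dtype mismatch:", original.shape, original.dtype,
          "vs", roundtrip.shape, roundtrip.dtype)
else:
    diff = np.abs(original.astype(np.int64) - roundtrip.astype(np.int64))
    print("max per-pixel difference:", diff.max())  # 0 means the conversion was lossless
```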

@TPS

TPS commented Jul 7, 2023

@Armavica Thanks for the time & feedback 🙇🏾‍♂️

@Maingron

I've always wondered if it's the same for images as with file archives - The heavier the compression, the longer we need to decompress (or compress). So if we have a very compressed image in comparison to a less compressed image, could we eventually see diminishing returns?
I mean it's less data to work with, but said data is also much harder to work with, I imagine.

As far as I am aware, this goes for decompression of file archives, videos, and others.
Regarding compression, I've never seen anything other than more = heavier, more more = way heavier (and smaller improvements).

Assuming we don't get diminishing returns, my next question as a web developer would be: How many times does an image have to be served until we're carbon-positive with what we've done in comparison to not optimizing an image?

@Maingron

Actually, there is yet another question I've asked myself quite often: if I compress game resources, will anything about the game get slower?
More specifically: if I compress a Minecraft texture pack, is there ANY downside, or will it literally just save some disk space? It's hard to believe there's no reason why nobody compresses their textures.

@AlexTMjugador
Collaborator

AlexTMjugador commented Oct 15, 2023

I've always wondered if it's the same for images as with file archives - The heavier the compression, the longer we need to decompress (or compress). So if we have a very compressed image in comparison to a less compressed image, could we eventually see diminishing returns?

For any given data in any given compressed format, by definition of the compressed format, the decompression algorithm and its parameters must be known in advance: otherwise a program would not be able to make sense of such data. Thus, if two compressors produce different files but keep the decompression algorithm and parameters constant (as is the case with PNGs), the expected result is that the smaller file will be faster to decompress, since the constant decompression algorithm has less data to work with in the first place, and no general-purpose decompression algorithm can have an asymptotic complexity better than O(n).

However, practical compression formats give a lot of leeway to compressors to choose different approaches to encode the same data, and it may happen that a compressor that produces a smaller file does so at the cost of using less optimized, less tested in a given decompressor, or fundamentally more computationally expensive coding features of a compression format. In the case of DEFLATE, which is used for PNG IDAT compression, any compressor that gives usable compression ratios in the general case almost always uses dynamic prefix code blocks, which require the decompressor to implement the full LZ77 and prefix code decoding logic anyway, so I'd expect file size to predict most of the variance in decompression time.

In simpler terms, decompressing data is not necessarily a task that requires symmetric processing power with respect to compressing it: in general, there is no reason to think that if it takes longer to compress some data, it must also take longer to decompress it. Zopfli is an extreme example of what happens when you ignore compression time in exchange for space savings and drop-in compatibility with existing DEFLATE decompressors. This is somewhat analogous to how optimizing compilers work: they take longer to build a given piece of code, but that code runs faster than code built with non-optimizing compilers.
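A quick way to see this asymmetry with DEFLATE itself, using Python's zlib (the input file and compression levels here are arbitrary): higher levels cost noticeably more compression time, while decompression time mostly tracks the compressed size.

```python
# Compression level mainly affects compression time and output size;
# decompression time tracks the compressed size, not the effort spent compressing.
import time
import zlib

# Any reasonably large, compressible file will do.
data = open("/usr/share/dict/words", "rb").read()

for level in (1, 6, 9):
    t0 = time.perf_counter()
    compressed = zlib.compress(data, level)
    t1 = time.perf_counter()
    zlib.decompress(compressed)
    t2 = time.perf_counter()
    print(f"level {level}: {len(compressed):>9} B, "
          f"compress {t1 - t0:.3f} s, decompress {t2 - t1:.3f} s")
```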

How many times does an image have to be served until we're carbon-positive with what we've done in comparison to not optimizing an image?

The answer to this question depends heavily on what estimates you use for the power consumption and production costs of generating and moving bytes of data between computers.

But, according to a random paper, the global average CO2 emissions to produce 1 kWh of electricity are 420 g, and a pessimistic figure for the power consumption to transfer 1 GiB is 7.1 kWh. The latest OxiPNG benchmark shows that a current Intel CPU with a TDP of 65 W can optimize 98.5 KiB of PNG data in ~90 ms, which equates to ~0.00104 GiB/s, and that CPU consumes an average of ~30 W during this benchmark. Optimizing 1 GiB of PNGs would take ~958.09 s = ~0.266 h, which, assuming a total power consumption of ~150 W (to account for other devices, PSU inefficiencies... as a very rough approximation) comes out to ~39.92 Wh = ~0.03992 kWh. If we further assume that OxiPNG manages to reduce the dataset size from 1 GiB to 768 MiB (a ~25% reduction), this would save 1.775 kWh in transfers, which is equivalent to ~45 OxiPNG runs on the original data, and highlights that running OxiPNG is always beneficial from a carbon footprint standpoint for any data that is transferred more than once.
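Spelled out as a quick back-of-the-envelope calculation (same assumed figures as above):

```python
# Back-of-the-envelope version of the estimate above, using the same assumed figures.
KWH_PER_GIB_TRANSFERRED = 7.1                         # pessimistic transfer cost, kWh per GiB
OPTIMIZE_RATE_GIB_PER_S = 98.5 / 1024 / 1024 / 0.090  # ~0.00104 GiB/s from the benchmark
SYSTEM_POWER_W = 150                                  # rough whole-system draw while optimizing

optimize_seconds = 1 / OPTIMIZE_RATE_GIB_PER_S                # ~958 s per GiB
optimize_kwh = SYSTEM_POWER_W * optimize_seconds / 3_600_000  # ~0.040 kWh per GiB
saved_kwh_per_transfer = 0.25 * KWH_PER_GIB_TRANSFERRED       # 25% smaller -> ~1.775 kWh

print(f"optimizing 1 GiB costs ~{optimize_kwh:.3f} kWh")
print(f"each transfer then saves ~{saved_kwh_per_transfer:.3f} kWh "
      f"(~{saved_kwh_per_transfer / optimize_kwh:.1f}x the optimization cost)")
```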

The previous numbers are of course very rough approximations, and the precise conclusion may change depending on the specifics of the situation, but overall I'm confident that in practical circumstances transferring bytes repeatedly is much more expensive than generating them once. Cloud providers like Google and Cloudflare invest heavily in better compression techniques for good reasons.

If I compress game resources, will anything about the game get slower? More specifically: if I compress a Minecraft texture pack, is there ANY downside, or will it literally just save some disk space? It's hard to believe there's no reason why nobody compresses their textures.

This also depends on the specifics of the game, but Minecraft in particular decompresses all resource pack assets into memory before loading them, so there is no difference in runtime performance after the initial load depending on the compression algorithm you choose. I think the main reason why not many people optimize their packs is that they don't care, don't know better, or are not patient enough to spend the computer time on better compression themselves 😉

@Maingron

Ok wow, that's quite an impressive answer, one I'd never have expected to get from anyone. Thanks for explaining all that! @AlexTMjugador

Also, if we're rude and assume all your numbers are heavily biased towards Oxipng by orders of magnitude, we'd still be carbon-positive pretty quickly. That's been blowing my mind pretty much the whole day, for some reason.

I think it might be beneficial to have a section in the README documenting this. Maybe someone out there is still wondering whether it's worth using Oxipng.

@jonnyawsom3

  • For .webp, cwebp -lossless -exact -alpha_filter best should do, though -exact is only needed if you desperately care about the color values of 100% transparent pixels
  • For .jxl, cjxl -d 0 -e 9 -m=1 should do, though it could be -d 1 instead if you don't care about invisible pixel values (as with cwebp), and it's worth trying runs without -m=1 in case the VarDCT algorithm does better losslessly than Modular in rare cases

@TPS I'm very late, but I think you're mistaken about the JXL arguments: VarDCT doesn't support lossless, and -d 1 is lossy.

So, after some investigation, the problem with WebP is that it doesn't support 16-bit images, so it doesn't work for my application (scientific images). As for JXL, it seems to support 16-bit images, but I can't find a setting that effectively preserves pixel values. I tried both -d 0 ("mathematically lossless") and -d 1 ("visually lossless"): in the first case the size is 20% smaller than with oxipng, and in the second about 90% smaller, but in neither case are the images losslessly reconstructed when I reload and compare them with Python (it is possible that I am doing something wrong, though). I don't think that I have more time to spend on this, unfortunately…

@Armavica I've had issues getting WebP lossless even on 8-bit images for whatever reason, although JXL not preserving pixels with -d 0 sounds like a bug. If you have some time, it might be worth opening an issue on libjxl or leaving a message on the Discord server.

@TPS

TPS commented Nov 1, 2023

@TPS I'm very late, but I think you're mistaken about the JXL arguments: VarDCT doesn't support lossless, and -d 1 is lossy.

@jonnyawsom3 Per https://github.com/libjxl/libjxl/blob/main/doc/format_overview.md#pixel-data, you're correct. I think I had lossy Modular mixed up the wrong way around.
