mozjpeg vs. libjpeg-turbo for on-demand web asset recompression #176

Closed · lilith opened this issue May 19, 2015 · 6 comments
lilith commented May 19, 2015

My use case is essentially an image resizing and re-compressing HTTP proxy.

Jpeg decompression speed matters a lot here, as input images can be orders of magnitude larger than output size. Jpeg compression speed isn't as important.
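
(For concreteness, a minimal sketch of such a pipeline using the djpeg/cjpeg tools bundled with both projects; the file names and scale factor here are placeholders:)

    # Decode at a reduced scale (djpeg's -scale uses the decoder's cheap
    # IDCT scaling), then re-encode at the target quality.
    djpeg -scale 1/4 large-input.jpg | cjpeg -quality 75 -optimize > small-output.jpg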

Does mozjpeg sacrifice decoding speed? If so, to what extent?

@jrmuizel

Progressive JPEGs will be slower to decode than sequential ones. Other than that, decoding speed will be dominated by resolution and file size.

@kornelski

You can try it yourself with the tjbench tool bundled with mozjpeg/libjpeg-turbo:

 tjbench file.jpg
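
Since a resizing proxy typically decodes at a reduced size anyway, tjbench can also benchmark scaled decompression; this assumes libjpeg-turbo's -scale M/N option, and the factor here is illustrative:

    tjbench file.jpg -scale 1/4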

On my machine (3 GHz i7), a mozjpeg-produced progressive file decodes at 72 Mpx/s, and a baseline (non-progressive) one at 97 Mpx/s.

To put that in perspective, that's 128 ms and 95 ms respectively to decode a full-screen JPEG on a "4K" display (those times imply an image of roughly 9 Mpx).

@dcommander

@nathanaeljones It depends on a variety of factors, but if you are creating an image resizing proxy, then compression performance must be of at least some importance. It may not be as important as decompression, but please note that the compression slow-down in mozjpeg vs. libjpeg-turbo is severe: as much as 50x. mozjpeg is not, by any stretch of the imagination, going to compress images quickly; its compression performance is in fact many times worse than that of the unaccelerated IJG libjpeg.

Now, speaking specifically about decompression: I have observed that an image encoded with trellis quantization (using mozjpeg) does decompress more quickly than an image encoded without it (using libjpeg-turbo), but the trade-off with compression performance is not very favorable. Some specific examples (a command-line sketch of these configurations follows the list):

  • If you start with a baseline JPEG image, adding multi-pass (optimized) entropy coding (supported by both libjpeg-turbo and mozjpeg) will improve the compression ratio by ~1-13% (average ~5%) on our test images (http://www.libjpeg-turbo.org/About/Performance), and decompression performance will be improved by ~2-11% (average ~6%), but compression performance will drop by about half.
  • If you start with a baseline JPEG image, adding trellis quantization (supported only by mozjpeg) will improve the compression ratio by ~10-18% (average ~13%), and decompression performance will be improved by ~4-12% (average ~9%), but compression performance will drop by ~15-40x (average ~25x). So you're sacrificing as much as 98% of your compression performance for only a few percent more decompression performance.
  • If you start with a baseline JPEG image, switching to progressive entropy encoding with multi-pass optimization (supported by both mozjpeg and libjpeg-turbo) will improve the compression ratio by ~5-20% (average ~11%), but decompression performance will be reduced by ~65%, and compression performance will be reduced by ~90%.
  • If you start with a baseline JPEG image, then switching to progressive entropy encoding with multi-pass optimization and adding trellis quantization (supported only by mozjpeg) will improve the compression ratio by ~15-25% (average ~20%), but decompression performance will be reduced by ~15-70% (average ~50%), and compression performance will be reduced by ~98% (50x).
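
As a rough guide, those configurations map onto each project's bundled cjpeg roughly as follows (a sketch based on the documented switches; file names are placeholders, and note that mozjpeg's cjpeg enables progressive mode and trellis quantization by default):

    # libjpeg-turbo's cjpeg (baseline, fixed Huffman tables by default):
    cjpeg -quality 90 in.ppm > baseline.jpg
    cjpeg -quality 90 -optimize in.ppm > baseline-opt.jpg            # + multi-pass entropy coding
    cjpeg -quality 90 -progressive -optimize in.ppm > prog-opt.jpg   # + progressive encoding

    # mozjpeg's cjpeg (progressive + trellis quantization by default):
    cjpeg -quality 90 in.ppm > prog-trellis.jpg
    cjpeg -quality 90 -notrellis in.ppm > prog-opt-moz.jpg           # disable trellis quantization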

Conclusion: if your performance is primarily constrained by the CPU, then libjpeg-turbo will always produce the best results. If your performance is primarily constrained by the network, then there are some circumstances under which mozjpeg will produce better compression ratios, but only incrementally better, and at the expense of extremely high CPU usage. Also, it is worth noting that trellis quantization is not a lossless process, so in fact comparing results with it enabled and disabled is not a true apples-to-apples comparison.

@MrOutput

For a use case of resizing images for the web, mozjpeg sacrifices encoding performance but increases decoding performance... is my understanding correct? If so, this is exactly what I need.

My main goal is decoding performance.

@kornelski

@MrOutput Yes, but as @dcommander pointed out, decoding performance is heavily affected by the choice of progressive vs. baseline format, so choose one or the other depending on whether you also care about download time.

If you focus strictly on decoding performance, disregarding download time, then mozjpeg still improves the situation slightly, but in that case use non-progressive JPEG.

For the web use case, where what matters is the total time to download and decode, use MozJPEG with progressive encoding (its default): the overall time is generally dominated by download time, and progressive encoding further reduces the amount of data to download.
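
For existing files, that choice can also be applied losslessly with jpegtran from either project (a sketch; file names are placeholders, and mozjpeg's jpegtran defaults may differ from libjpeg-turbo's):

    # Re-encode an existing JPEG as optimized progressive:
    jpegtran -copy none -optimize -progressive in.jpg > out-progressive.jpg

    # Re-encode as optimized baseline (libjpeg-turbo's jpegtran emits
    # sequential output unless -progressive is given):
    jpegtran -copy none -optimize in.jpg > out-baseline.jpg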

@MrOutput

@pornel Thank you, I will use MozJPEG with the progressive format then.

pull bot pushed a commit to admariner/mozjpeg that referenced this issue Aug 15, 2022:
People keep trying to include libjpeg-turbo into downstream CMake-based
build systems by way of the add_subdirectory() function and requesting
upstream support when something inevitably breaks.

(Refer to: mozilla#122, mozilla#173, mozilla#176, mozilla#202, mozilla#241, mozilla#349, mozilla#353, mozilla#412, #504,
libjpeg-turbo/libjpeg-turbo@a3d4aad#commitcomment-67575889).

libjpeg-turbo has never supported that method of sub-project
integration, because doing so would require that we (minimally):

1. avoid using certain CMake variables, such as CMAKE_SOURCE_DIR,
   CMAKE_BINARY_DIR, and CMAKE_PROJECT_NAME;
2. avoid using implicit include directories and relative paths;
3. provide a way to optionally skip the installation of libjpeg-turbo
   components in response to 'make install';
4. provide a way to optionally postfix target names, to avoid namespace
   conflicts;
5. restructure the top-level CMakeLists.txt so that it properly sets
   the PROJECT_VERSION variable; and
6. design automated CI tests to ensure that new commits don't break
   any of the above.

Even if we did all of that, issues would still arise, because it is
impossible for one upstream build system to anticipate the widely
varying needs of every downstream build system.  That's why the CMake
ExternalProject_Add() function exists, and it is my sincere hope that
adding a blurb to BUILDING.md mentioning the need to use that function
will head off future GitHub issues on this topic.  If not, then I can at
least post a link to this commit and the blurb and avoid doing the same
song and dance over and over again.