mozjpeg vs. libjpeg-turbo for on-demand web asset recompression #176
Progressive JPEGs will be slower to decode than sequential ones. Other than that, decoding speed will be dominated by resolution and file size.
You can try it yourself with
On my machine (3 GHz i7), a mozjpeg-produced progressive file decodes at 72 Mpx/s, and a baseline (non-progressive) one at 97 Mpx/s. To put that in perspective, it's 128 ms and 95 ms respectively to decode a full-screen JPEG on a "4K" display.
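As a sanity check on those figures, decode time scales linearly with pixel count. A minimal sketch: the exact resolution of the "4K" display above is not stated, so the ~9.2 Mpx canvas (e.g. 3840×2400) used here is an assumption that happens to reproduce the quoted times from the measured throughputs:

```python
# Estimate JPEG decode time from measured throughput.
# The 72 and 97 Mpx/s figures are from the comment above; the
# 3840x2400 "4K" resolution is an assumption for illustration.

def decode_time_ms(width, height, throughput_mpx_per_s):
    """Return decode time in milliseconds for a width x height image."""
    megapixels = width * height / 1e6
    return megapixels / throughput_mpx_per_s * 1000

progressive_ms = decode_time_ms(3840, 2400, 72)  # mozjpeg progressive
baseline_ms = decode_time_ms(3840, 2400, 97)     # baseline (non-progressive)
```

With those inputs the model lands at roughly 128 ms and 95 ms, matching the quoted numbers.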
@nathanaeljones It depends on a variety of factors, but if you are creating an image resizing proxy, then compression performance must be of at least some importance. It may not be as important as decompression, but please note that the compression slow-down in mozjpeg vs. libjpeg-turbo is severe -- as much as 50x. mozjpeg is not, under any stretch of the imagination, going to compress images quickly -- its compression performance is in fact many times worse than that of the unaccelerated IJG libjpeg.

Now, speaking specifically about decompression: I have observed that an image encoded with trellis quantization (using mozjpeg) does decompress more quickly than an image encoded without it (using libjpeg-turbo), but the tradeoff in compression performance is not very favorable. Some specific examples:
Conclusion: if your performance is primarily constrained by the CPU, then libjpeg-turbo will always produce the best results. If your performance is primarily constrained by the network, then there are some circumstances under which mozjpeg will produce better compression ratios, but only incrementally better, and at the expense of extremely high CPU usage. Also, it is worth noting that trellis quantization is not a lossless process, so in fact comparing results with it enabled and disabled is not a true apples-to-apples comparison.
For the use case of resizing images for the web, mozjpeg sacrifices encoding performance but increases decoding performance... is my understanding correct? If so, this is exactly what I need. My main goal is decoding performance.
@MrOutput Yes, but as @dcommander pointed out, decoding performance is heavily affected by the choice of progressive vs. baseline format, so choose one or the other depending on whether you also care about download time. If you focus strictly on decoding performance, disregarding download time, then mozjpeg still improves the situation slightly, but in that case use non-progressive JPEG. For the web use case, where what matters is the total time to download and decode, use MozJPEG with progressive (the default), because the overall time is generally dominated by download time, and progressive further reduces the amount of data to download.
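The "download time dominates" argument can be made concrete with a toy model of total time = download + decode. Every number below is hypothetical, chosen only to illustrate why a smaller progressive file can win overall even though it decodes more slowly:

```python
# Toy model of the web tradeoff described above.
# File sizes, link speed, and image size are illustrative assumptions;
# the 72 and 97 Mpx/s decode rates echo the measurements earlier in the thread.

def total_time_ms(file_bytes, link_mbps, megapixels, decode_mpx_per_s):
    """Total time in ms to download and then decode one JPEG."""
    download_ms = file_bytes * 8 / (link_mbps * 1e6) * 1000
    decode_ms = megapixels / decode_mpx_per_s * 1000
    return download_ms + decode_ms

# Hypothetical 2 Mpx image on a 10 Mbit/s link:
progressive = total_time_ms(180_000, 10, 2.0, 72)  # smaller file, slower decode
baseline    = total_time_ms(220_000, 10, 2.0, 97)  # larger file, faster decode
```

Under these assumptions the progressive file finishes sooner despite its slower decode, because the ~40 KB of extra download for the baseline file costs more time than the decode speedup saves.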
@pornel thank you, I will use MozJPEG and progressive format then. |
People keep trying to include libjpeg-turbo into downstream CMake-based build systems by way of the add_subdirectory() function and requesting upstream support when something inevitably breaks. (Refer to: mozilla#122, mozilla#173, mozilla#176, mozilla#202, mozilla#241, mozilla#349, mozilla#353, mozilla#412, #504, libjpeg-turbo/libjpeg-turbo@a3d4aad#commitcomment-67575889.) libjpeg-turbo has never supported that method of sub-project integration, because doing so would require that we (minimally):

1. avoid using certain CMake variables, such as CMAKE_SOURCE_DIR, CMAKE_BINARY_DIR, and CMAKE_PROJECT_NAME;
2. avoid using implicit include directories and relative paths;
3. provide a way to optionally skip the installation of libjpeg-turbo components in response to 'make install';
4. provide a way to optionally postfix target names, to avoid namespace conflicts;
5. restructure the top-level CMakeLists.txt so that it properly sets the PROJECT_VERSION variable; and
6. design automated CI tests to ensure that new commits don't break any of the above.

Even if we did all of that, issues would still arise, because it is impossible for one upstream build system to anticipate the widely varying needs of every downstream build system. That's why the CMake ExternalProject_Add() function exists, and it is my sincere hope that adding a blurb to BUILDING.md mentioning the need to use that function will head off future GitHub issues on this topic. If not, then I can at least post a link to this commit and the blurb and avoid doing the same song and dance over and over again.
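For downstream consumers, the ExternalProject_Add() approach mentioned above looks roughly like the following sketch. The Git URL, tag, and install-prefix handling here are illustrative assumptions, not something prescribed by this thread:

```cmake
include(ExternalProject)

# Build and install libjpeg-turbo as an external project, instead of
# pulling its CMakeLists.txt into this build via add_subdirectory().
ExternalProject_Add(libjpeg-turbo
  GIT_REPOSITORY https://github.com/libjpeg-turbo/libjpeg-turbo.git
  GIT_TAG        3.0.1                            # pin a release tag (assumption)
  CMAKE_ARGS     -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>
)

# Downstream targets would then consume the installed headers and
# libraries from the external project's install prefix, e.g. via
# find_package(JPEG) or explicit include/link directories.
```

Because the external project is configured and built in isolation, none of the upstream CMake variables or install rules collide with the downstream build.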
My use case is essentially an image resizing and re-compressing HTTP proxy.
JPEG decompression speed matters a lot here, as input images can be orders of magnitude larger than the output size. JPEG compression speed isn't as important.
Does mozjpeg sacrifice decoding speed? If so, to what extent?