Comparison with mozjpeg? #10
We will soon publish a more detailed comparison that takes mozjpeg and libjpeg into account. Note that comparing two JPEG encoders is more complicated than it might at first appear, because one needs to compare the quality of the compressed image. Obviously, we actually care about quality as perceived by humans, but human evaluation takes time, is costly, and is hard to do correctly, so we often make do with proxies. Traditional metrics (SSIM, PSNR-HVS) are commonly used as proxies; Butteraugli is designed to be such a proxy. These proxies can and do disagree with each other.
How soon is soon?
By the way, I am almost sure that if it so happens that mozjpeg is more efficient, the comparison won't be published, because mozjpeg is Mozilla's project and this is Google™, and Google™ doesn't like to admit failures once its https://en.wikipedia.org/wiki/Not_invented_here syndrome kicks in. By the way, mozjpeg is also ABI-compatible with libjpeg-turbo, which is a valuable thing, and if I understand it right, guetzli isn't.
We are still preparing the publication. Other people have run some tests in the meantime. Guetzli is both better and worse than Mozjpeg. https://twitter.com/fg118942/status/820984186974584832 shows an interesting reversal of codecs: Guetzli is the worst on SSIM and PSNR, but best on butteraugli scoring. This is really neither a miracle nor proof of its superiority, since guetzli is just a complex optimizer for the butteraugli rating. Ordering of codecs with ssim:
Ordering of codecs with butteraugli:
Note that butteraugli only looks at the worst area of the image, whereas ssim and psnr aggregate errors from everywhere in the image. I have read that human raters like JPEGs more than similarly sized JPEG 2000 and JPEG XR images, but SSIM and PSNR consistently rank them the opposite of how humans tend to.
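The difference between a worst-area metric and an averaging metric can be sketched with hypothetical per-block scores (a toy illustration only, not Butteraugli's or SSIM's actual math; all numbers are made up):

```python
def aggregate_error(per_block_error, mode):
    """Combine per-region distortion scores into one image-level score.

    'worst' resembles butteraugli's approach: the image is only as good
    as its worst area.  'mean' resembles SSIM/PSNR-style aggregation:
    errors from everywhere in the image contribute.
    """
    if mode == "worst":
        return max(per_block_error)
    return sum(per_block_error) / len(per_block_error)

# Hypothetical per-block distortions for two encodings of the same image:
# codec A spreads mild errors everywhere, codec B is clean except one bad block.
codec_a = [0.8, 0.8, 0.8, 0.8]
codec_b = [0.1, 0.1, 0.1, 2.0]

# An averaging metric prefers codec B (0.575 vs 0.8) ...
assert aggregate_error(codec_b, "mean") < aggregate_error(codec_a, "mean")
# ... while a worst-area metric prefers codec A (0.8 vs 2.0).
assert aggregate_error(codec_a, "worst") < aggregate_error(codec_b, "worst")
```

This is why the two families of metrics can legitimately rank the same pair of encoders in opposite orders.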
@magicgoose I'd like to see a rigorous comparison too, but … do you really think the tone of that message is helpful? I don't know what the mood is like at Google, but I'm sure it's not helped by unnecessarily politicizing this issue.
A little bit of trolling is always good; that's sort of my motto. I'm pleased to see that this was unnecessary and these conspiracy theories are not true. 😄
With some simple convert actions on the console comparing mozjpeg, libjpeg and guetzli with your bees.png sample image, I can't perceive any significant differences in terms of filesize.
-rw-rw-r-- 1 maik maik 20316 6. Mär 13:07 bees-guetzli-85.png.jpg
-rw-rw-r-- 1 maik maik 20128 6. Mär 13:09 bees-libjpeg-85.png.jpg
-rw-rw-r-- 1 maik maik 20359 6. Mär 13:06 bees-mozjpeg-85.png.jpg
-rw-rw-r-- 1 maik maik 177424 24. Okt 17:11 bees.png
Which comparison method do I have to use to get the "typically 20-30% smaller" images with guetzli? Do I have to use it with butteraugli anyway?
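For what it's worth, the sizes in the listing above can be compared directly. A quick sketch shows the three encoders land within about 1% of each other at this quality setting, nowhere near 20-30% (which, per the replies below, is measured at matched perceptual quality rather than at the same quality number):

```python
# File sizes in bytes, taken from the ls -l listing above.
sizes = {"guetzli": 20316, "libjpeg": 20128, "mozjpeg": 20359}

smallest, largest = min(sizes.values()), max(sizes.values())
spread = (largest - smallest) / largest * 100
print(f"spread between encoders: {spread:.2f}%")  # ~1.1%
```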
Thanks for running these tests on guetzli. This is very helpful. We have done two different comparisons: one where we keep the filesize the same between guetzli and libjpeg and get humans to look at the pictures, and another where we keep the butteraugli quality the same and look at filesize. The first shows a 75-80% preference for guetzli-generated images over the libjpeg images. The latter shows a 35% filesize reduction from libjpeg, and possibly around 30% from mozjpeg. We didn't do a full human rater study between guetzli and mozjpeg, but a few samples indicated that mozjpeg is closer to libjpeg than to guetzli in human viewing. Our comparisons are done at around qualities 90 to 95, on a calibrated high-quality monitor intended for photo editing work. Differences at these qualities are subtle. I think the bees image is one that doesn't suffer a lot from YUV420, and guetzli's aversion to YUV420 might explain why that image works well with convert and mozjpeg. If you try YUV420 with an image where there are red things next to green things, you can observe what is happening in more detail.
Not sure why you are pointing at YUV420, mozjpeg can encode full resolution chroma as well.
@magicgoose Note that mozjpeg encodes with YUV420 by default. When you ask mozjpeg to encode with YUV444, the produced image is ~45% larger:
@robryk I think it's YUV420 by default when quality is less than some number… also, from my testing, |
http://screenshotcomparison.com/comparison/203655 Some blocky artifacts are more obvious. Just FYI.
@xlz can you also upload the source somewhere? I think it'd be interesting to compare it with mozjpeg too (not sure if GIMP uses mozjpeg already).
@xlz Try running guetzli with a higher quality. This shouldn't happen if you use quality 94 or higher. Guetzli works in a way where image degradation starts to happen everywhere at roughly the same quality setting; it doesn't try to 'save' the easy areas from degradation.
I did some tests on my own product images, it seems
After reading
@vinhlh Size comparisons make sense between images equivalent in quality. I've taken a look at your repository. It seems that you've taken a set of images and did two things with them: Note that (b) doesn't introduce any loss at all. You can do (b) to Guetzlified images too. We've decided not to do anything like (b) in Guetzli, because progressive JPEGs are up to 2x slower to decode. I've tried re-encoding Guetzlified images in your corpus. The results, compared to your only-Guetzli and only-re-encoding numbers, look as follows (percentages differ slightly from yours, because I've compared the total size change as opposed to averaging the percentages across images):
Thanks for assembling this image corpus and pointing me at it; on most corpora I've seen, re-encoding as progressive had smaller gains.
@robryk sorry, I didn't know
There's also a Google guy who recommends an optimal Butteraugli score of 1.1. I will do more tests with baseline images and a larger set.
@niutech who cares? It's proprietary software, so it's irrelevant.
@magicgoose If JPEGMini gets significantly better results, I'm sure a lot of people would like to see an OSS solution become more competitive.
You all are doing amazing work. Depending on circleci.com you may request
certain demands. Within reason!!!
@acdha yeah that's true, being more competitive is always good, regardless of whether there exists some non-free software which is claimed to be more powerful.
I'd like to remind everyone that any benchmark that only looks at file sizes is essentially invalid. Any such benchmark can simply be won by the worst codec, the one that lowers image quality more than all the other tools. You can say one codec is better than another if it wins on both file size and quality. But also be careful with "close enough" quality: in JPEG, file size grows roughly exponentially with quality, so even differences in quality so small that you'd say "looks the same to me" can significantly change file size, and not because the codec is technically better, but because the settings that were used, or the re-encoding process, allowed it to lose more humanly-imperceptible information.
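The exponential size/quality relationship can be illustrated with a toy model (the base size and growth constant below are made-up numbers for illustration, not measurements of any real encoder):

```python
import math

def approx_size(quality, base=10_000, k=0.04):
    """Toy model (assumed, not measured): near the high end, JPEG file
    size grows roughly exponentially with the quality setting."""
    return base * math.exp(k * quality)

# Two settings a viewer might well call "the same quality", four points apart:
s90, s94 = approx_size(90), approx_size(94)
print(f"q94 is {(s94 / s90 - 1) * 100:.0f}% larger than q90")  # ~17% larger
```

Under this model, a quality difference too small to see still moves the file size by double-digit percentages, which is exactly why "looks the same, but smaller" is not by itself evidence of a better codec.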
In another issue @pflacson posted a link to a direct comparison: https://css-ig.net/articles/guetzli Note about the comparison: it uses the DSSIM metric, but Guetzli deliberately optimizes for its own Butteraugli metric. It definitely succeeds at optimizing for its own metric, so in the end the result depends on which measurement you believe is more accurate (and neither of them is perfect).
Thanks for bringing that comparison here, Kornel. Also, I must say that this week guetzli has received a set of enhancements that have notably reduced the resources it needs (memory to encode an image went from 300 to 125 KB per pixel).
@jmespadero Small correction: it's 125 bytes per pixel, not KB per pixel.
True. Once again, my congratulations for the set of optimizations and enhancements you have done recently.
I've tested with a larger set: 500 product images, this is the result:
@vinhlh what about quality metrics?
Note: tested on a
@vinhlh and what's the point of this comparison?
@vinhlh I'm a bit late to this discussion, but I looked at your image comparisons and the mozjpeg colors seem significantly different to me. I do not see any difference with the guetzli ones.
If color is significantly off in the entire image, it's almost always not due to compression, but merely due to the embedded color profile being interpreted differently. Try converting images to sRGB, or double-check that the same profile is embedded in the same way by all tools you compare.
I ran another test with 2,000+ user images from a forum:
Yes, I read that. I do not see a quality loss between guetzli vs guetzli + moz. The only diff I see is between the original and guetzli, at sharp edges.
Again, please read the linked article, and see the example which shows a 40% difference in compression efficiency and no obvious quality loss, achieved by doing nothing.
So what does it matter? Images are looked at by humans; if there is no visible difference to the human eye…
Btw. in that example the quality difference is visible at the edges; the author of the post messed up the file description. The one on the right has sharper edges.
"No visible difference" is a fine goal, but your benchmark measures a different thing than what you think. "Smaller with no visible difference" does not actually decide which codec is better, because it does not demonstrate a Pareto improvement. Even summing up file sizes is subject to Simpson's paradox. Basically, it's easy to get statistics wrong. Codec comparison is a tricky 2D problem, and it's tempting to reduce it to a 1D problem, but that reduction is harder than it seems. I can make any codec win this way. I've made a presentation about this that describes the fallacies in depth:
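The aggregation pitfall can be shown with made-up numbers: averaging per-image percentage savings and comparing total bytes can rank two hypothetical codecs in opposite orders (all sizes below are invented for illustration):

```python
def mean_percent_saving(images, idx):
    """Average the per-image fractional savings (each image weighted equally)."""
    return sum((orig - row[idx]) / orig for orig, *row in images) / len(images)

def total_saving(images, idx):
    """Saving computed on total bytes (large images dominate)."""
    total_orig = sum(orig for orig, *_ in images)
    total_out = sum(row[idx] for _, *row in images)
    return (total_orig - total_out) / total_orig

# (original, codec_a, codec_b) sizes in bytes, hypothetical corpus:
images = [
    (100, 50, 60),        # small image: A saves 50%, B saves 40%
    (10_000, 9_000, 8_500),  # large image: A saves 10%, B saves 15%
]

# Averaging per-image percentages says codec A wins (30% vs 27.5%) ...
assert mean_percent_saving(images, 0) > mean_percent_saving(images, 1)
# ... but comparing total bytes says codec B wins (~15.2% vs ~10.4%).
assert total_saving(images, 1) > total_saving(images, 0)
```

Neither number is "the" answer; which aggregation you pick changes the winner, which is why a one-dimensional file-size summary is so easy to get wrong.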
I never made those claims; it's also not my comparison, see the previous comments. I just ran it with my inventory because the previous test was run on product images with solid color backgrounds, all encoded the same way... so there was no mix of different encodings, qualities, sizes, content... The goal for me is to cut file size/storage requirements without sacrificing (humanly visible) quality. This needs to work on thousands of different images, so it can't just work on a specific kind.
The method you've used would have shown similar results if you'd used the same codec, any JPEG codec, twice. It shows that re-encoding and small variations in quality make a big difference. It says nothing about the relative performance of codecs. Getting smaller files by re-encoding them at a quality you're OK with is absolutely fine, but posting that in a thread about "comparison with mozjpeg" makes misleading claims.
So you mean, if I run guetzli again (guetzli + guetzli), I end up with a comparable result as to guetzli + moz progressive?
I mean that a simple method of "re-encoded a bunch of images, got smaller files, and quality is almost the same" is so inaccurate that it will show you an "improvement" even if you compare the same codec with itself. Since a codec obviously can't be better than itself, this shows that the method is misleading. It's a perfectly valid thing to do if you want smaller files; it's just not a codec benchmark. Take a bunch of random files and re-encode them with mozjpeg at q90: because lossy codecs can produce basically any file size they want, looking just at the file size is not enough. Because in JPEG the relationship between quality and file size is close to exponential, differences in quality smaller than a human can see create file size differences big enough to seem important. It doesn't mean a technical improvement in the codec; it means quality was set higher than necessary.
Let's just say I re-encode all images with q90. Since I do not have the original raw file, the result will be worse than doing the same with the original file. It could even be that the file size increases, because it was encoded at a lower quality before.
One can reduce JPEG filesizes losslessly using MozJPEG. This is exactly what ImageOptim does (unless you turn on lossy, which is not the default). @Slind14 usually filesizes go down in recompress-lossy. I have not seen it go up.
Doesn't guetzli claim to be lossless too?
Guetzli is lossy. It just claims to be smart about how lossy it is.
@Slind14 Yes, lossy. Guetzli tries to be "psychovisually lossless" at default settings. However, the psychovisual model is not perfect and some observable loss may occur. Generally, the worst artefacts in a Guetzli image should be less visible than with other JPEG codecs at similar byte budgets.
So, the best way would be to re-encode images with a "medium-high" quality level (e.g. 80), then pick the re-encoded file if it is 10%+ smaller than the original, and if not, keep the original?
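The selection rule proposed above could be sketched like this (the 10% threshold and the helper name are just illustrative, not a recommendation from the Guetzli authors):

```python
def pick(original_size, reencoded_size, min_saving=0.10):
    """Keep the re-encoded file only if it is at least min_saving smaller
    than the original; otherwise keep the original file untouched.
    (Hypothetical policy from the question above.)"""
    if reencoded_size <= original_size * (1 - min_saving):
        return "reencoded"
    return "original"

assert pick(100_000, 85_000) == "reencoded"  # 15% smaller: take it
assert pick(100_000, 95_000) == "original"   # only 5% smaller: not worth a generation loss
```

Note that, as the next reply argues, this still re-compresses an already lossy file, so the "kept" images pay a quality cost that a size check alone does not capture.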
If you have critical visual quality requirements you should start from a lossless source image. If you don't have that, there is no point in using Guetzli IMHO. If you do have a lossless source image, Guetzli will be a good fit to compress the image while minimising visual quality loss. If however you have bandwidth restrictions and don't care about quality, there are compressors out there that are better suited than Guetzli. In no case does it ever make sense to compress an image multiple times. Doing so will have an effect, but never one that can't be achieved by choosing the right compressor in the first place and using the correct setting. There will always be other ways to get smaller images or higher quality images. Guetzli only attempts to find a good compromise without requiring you to choose the best setting for each image.
I do not have the source image every time. Images are from users; I'm looking for a way to improve mobile loading (save bandwidth) and storage requirements.
In that case I would not look into running Guetzli multiple times, but analyse whether the benefit of Guetzli (better visual quality) is worth it. If your application/users don't care about a few artefacts, it would make sense to choose a compressor which is capable of much smaller file sizes.
README says:
But what about mozjpeg? It existed long before this project, by the way (the 1.0 release was on Mar 5, 2014), and it does a similar job.