Description
The new AAN IDCT implementation I'm working on as part of the JPEG decoder optimization/refactoring provides faster and more accurate IDCT calculation.
The current conversion pipeline has this "conform to libjpeg" step:
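I don't have the exact code inlined here, but the step boils down to rounding the float IDCT output to integer sample values so it matches libjpeg's fixed-point results. A minimal sketch of the idea (hypothetical method name, not the actual ImageSharp code):

```csharp
// Hypothetical "conform to libjpeg"-style post-IDCT step: the float IDCT
// result is level-shifted, rounded to the nearest integer and clamped to
// [0, 255], throwing away the fractional precision the float IDCT produced.
private static void RoundLikeLibjpeg(Span<float> block)
{
    for (int i = 0; i < block.Length; i++)
    {
        float shifted = block[i] + 128f; // level shift to [0, 255] range
        block[i] = Math.Clamp(MathF.Round(shifted), 0f, 255f);
    }
}
```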
This step causes a quality hit by introducing noise (not a 'huge' one for a single encode, but it adds up over repeated re-encodings). It is especially noticeable as so-called 'generational loss'. Below is a comparison of rounding vs. no-rounding JPEG re-encoding without any change to the original image - no quality or subsampling changes.
*The bottom row is the difference between the original image and the re-encoded one.
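For reference, this kind of comparison can be reproduced with a simple re-encode loop; a rough sketch using ImageSharp's public API (file name and generation count are made up for the example):

```csharp
using System.IO;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

// Re-encode the same image N times and compare the result against the
// original to visualize generational loss.
using var original = Image.Load<Rgba32>("input.jpg");
var current = original.Clone();

for (int generation = 0; generation < 25; generation++)
{
    using var ms = new MemoryStream();
    current.SaveAsJpeg(ms); // default encoder settings here; the actual test kept the source quality/subsampling
    ms.Position = 0;
    current.Dispose();
    current = Image.Load<Rgba32>(ms);
}

// The "bottom row" images above are per-pixel absolute differences between
// `original` and `current`, amplified so the noise is visible.
current.Dispose();
```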
Of course users won't re-encode their images 5 or even 25 times, but this example shows how pointless the IDCT rounding is. Right now ImageSharp has a slower but more accurate IDCT; that accuracy should be a 'feature', not a 'downside' that has to be hidden. And on top of that, the rounding takes extra time to compute, so dropping it is a win-win :P
Issues like #245 simply can't be solved, because JFIF is a lossy image container - a 1/255 pixel miss is an expected trade-off.
Update
The same applies to downscaling decoding from #2076.
The no-rounding code provides slightly better PSNR results, but the linked PR will provide IDCT algorithms with rounding until this issue is resolved.
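For context, PSNR here is the usual per-channel metric against the reference image:

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right), \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2$$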