compression reduces as jpeg quality increases #8
The probable cause of this is the much longer runs of ones in the initial representation and the vastly increased size of the DC blocks.
I think the point was to perform well on JPEG, the most common image format. You want to compress lossy files simply because there are a lot of them, while not making the losses any worse. It is not misleading to call it a lossless compression algorithm, because uncompressed(compressed(data)) == data.
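The lossless property described here can be illustrated with any lossless codec; a minimal sketch using Python's zlib as a stand-in for Lepton's actual coder (the placeholder bytes below are an assumption, not a real JPEG file):

```python
import zlib

# Any lossless codec must satisfy decompress(compress(data)) == data.
# zlib stands in here for Lepton; the point is that the original JPEG
# bytes come back exactly, even though the JPEG itself is lossy.
jpeg_bytes = bytes(range(256)) * 64  # placeholder for real .jpg contents

compressed = zlib.compress(jpeg_bytes, 9)
restored = zlib.decompress(compressed)

assert restored == jpeg_bytes  # bit-exact round trip
```

The lossy step happened once, when the camera or converter wrote the JPEG; everything downstream can still be bit-exact.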
Since this algorithm relies on the predictiveness of neighboring blocks, it is actually to be expected that it performs better on images with higher compression. Sharp details make predicting the next block harder, and higher compression tends to remove those details. If you set the quality as low as 20, you will start to notice increasingly large areas where most of the detail is gone, leaving an image with lots of 8x8 solid-color blocks. Because of quantization, this color will often be visually quite different from the original average block color, increasing predictiveness even more. Look, I'm not the expert here; this is just what I gathered from reading a blog post about it.
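The neighbor-predictiveness argument can be sketched with a toy predictor (an illustration only, not Lepton's actual model): predict each 8x8 block's mean from the previous block and compress the residuals, for a smooth image versus a noisy one.

```python
import random
import zlib

random.seed(0)
W = 64  # toy image is W x W pixels, i.e. 8x8 blocks of 8x8 pixels

def block_means(img):
    # Mean of each 8x8 block, left to right, top to bottom.
    for by in range(0, W, 8):
        for bx in range(0, W, 8):
            vals = [img[y][x] for y in range(by, by + 8)
                              for x in range(bx, bx + 8)]
            yield sum(vals) // 64

def residual_size(img):
    # Predict each block mean from the previous one (a crude stand-in
    # for neighbor-based prediction) and compress the residual stream.
    prev = 128
    resid = bytearray()
    for m in block_means(img):
        resid.append((m - prev) % 256)
        prev = m
    return len(zlib.compress(bytes(resid), 9))

smooth = [[(x + y) // 2 for x in range(W)] for y in range(W)]          # gentle gradient
noisy = [[random.randrange(256) for _ in range(W)] for _ in range(W)]  # detail everywhere

print(residual_size(smooth), residual_size(noisy))
```

The smooth image's residuals are small and repetitive, so they compress far better, which is the same reason heavily quantized, blocky JPEGs suit a block-predictive coder.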
With very high 'quality' settings, you're essentially storing a lot of extra sensor noise that your camera produced. This noise is largely random in nature and can be difficult to compress well.
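How poorly random noise compresses is easy to demonstrate with any general-purpose compressor; a small sketch using zlib:

```python
import random
import zlib

random.seed(42)

# High-quality JPEGs preserve per-pixel sensor noise.
# Noise is essentially incompressible; structure is not.
noise = bytes(random.randrange(256) for _ in range(4096))
structure = bytes(i % 32 for i in range(4096))  # repetitive pattern

noisy_size = len(zlib.compress(noise, 9))
struct_size = len(zlib.compress(structure, 9))

# Random data stays near (or above) its original size;
# structured data shrinks drastically.
print(noisy_size, struct_size)
```

This is why a "quality 100" JPEG, which keeps the noise floor nearly intact, leaves any downstream compressor with little to exploit.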
I'm very interested in the lossless side of Lepton. It is a legitimate requirement in Dropbox's particular case, but in general it doesn't make much sense to insist on losslessness when JPEG is already lossy.
I'm not sure whether Lepton can compress a lossless format, so I set the JPEG quality to 100, just to approximate a lossless input. My input image is a 16-bit-depth lossless TIFF of a mammogram. I converted it to JPEG with " -auto-level -depth 8" and -quality values of 100, 80 and 20. The file sizes are listed below:
This result is very counter-intuitive to me, as I believed that the less JPEG does, the more room Lepton has for optimization, yet Lepton performs worse. My suspicion is that Lepton is mostly compressing the compression artifacts of JPEG, and if we somehow fed it a truly lossless input, there would probably be little compression. Unfortunately I'm not able to share the image due to its license, but I strongly suspect that this is a general behavior of the Lepton algorithm.
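The observed trend is consistent with how JPEG quantization works: lower quality divides DCT coefficients by larger steps, producing more zeros and repeated values, which any entropy coder exploits. A toy sketch with fabricated coefficients and assumed step sizes (not real JPEG quantization tables):

```python
import random
import zlib

random.seed(1)

# Fake "DCT coefficients": mostly small values, occasionally larger,
# roughly mimicking a transformed natural image.
coeffs = [int(random.gauss(0, 20)) for _ in range(8192)]

def quantized_size(step):
    # Larger quantization step (lower JPEG quality) -> more zeros and
    # repeats -> better compression of the coefficient stream.
    q = bytes((c // step) % 256 for c in coeffs)
    return len(zlib.compress(q, 9))

high_q = quantized_size(1)   # quality ~100: fine steps, little discarded
low_q = quantized_size(16)   # low quality: coarse steps, much discarded

print(high_q, low_q)
```

So a compressor operating on the coefficient stream naturally does better on low-quality input; that does not by itself settle whether Lepton is "only compressing artifacts", but it explains the direction of the numbers.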