This repository has been archived by the owner on Feb 14, 2023. It is now read-only.

compression reduces as jpeg quality increases #8

Closed
aaalgo opened this issue Jul 15, 2016 · 4 comments

Comments

@aaalgo

aaalgo commented Jul 15, 2016

I'm very interested in the lossless side of Lepton. It is a legitimate requirement in Dropbox's particular case, but in general it doesn't make much sense to insist on losslessness when JPEG is already lossy.
I'm not sure whether Lepton could potentially compress a lossless format, so I set the JPEG quality to 100 just to approximate a lossless input. My input image is a 16-bit lossless TIFF mammogram. I converted it to JPEG with "-auto-level -depth 8" and with -quality values of 100, 80, and 20. The file sizes are listed below:

  • TIFF: 18M (6.6M when converted to TIFF with -auto-level -depth 8)
  • jpeg/100: 5.3M ==> lep: 4.8M, 91.17%
  • jpeg/80: 636K ==> lep: 519K, 81.63%
  • jpeg/20: 82K ==> lep: 43K, 52.55%

This result is very counter-intuitive to me: I would expect that the less JPEG does, the more room for optimization Lepton has, yet Lepton performs worse. My suspicion is that Lepton is mostly compressing the compression artifacts of JPEG, and that if we somehow fed it a truly lossless input there would be little compression. Unfortunately I'm not able to provide the image due to licensing, but I strongly suspect that this is a general behavior of the Lepton algorithm.
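For anyone who wants to reproduce numbers like the ones above, here is a minimal Python sketch that drives the same tools. The source filename is a placeholder (the original image is not available), and the `lepton <input> <output>` command-line form is an assumption on my part, so adjust both to your setup.

```python
#!/usr/bin/env python3
"""Rough reproduction sketch: convert one source image to JPEGs at several
qualities with ImageMagick, run lepton on each, and print the compression
ratios. Filenames and the lepton CLI form are assumptions, not taken from
this issue."""
import os
import subprocess

SOURCE = "mammogram.tif"  # placeholder; the original image is not available

for quality in (100, 80, 20):
    jpg = f"q{quality}.jpg"
    lep = f"q{quality}.lep"
    # ImageMagick conversion with the flags quoted in the issue
    subprocess.run(
        ["convert", SOURCE, "-auto-level", "-depth", "8",
         "-quality", str(quality), jpg],
        check=True,
    )
    # compress the resulting JPEG with lepton (assumed CLI form)
    subprocess.run(["lepton", jpg, lep], check=True)

    jpg_size = os.path.getsize(jpg)
    lep_size = os.path.getsize(lep)
    print(f"quality {quality}: jpeg {jpg_size} B -> lep {lep_size} B "
          f"({100.0 * lep_size / jpg_size:.2f}%)")
```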

@aaalgo aaalgo changed the title from "compression reduces with jpeg quality = 100" to "compression reduces as jpeg quality increases" on Jul 15, 2016
@IDisposable

The probable cause of this is the much longer runs of ones in the initial representation and the vastly increased size of the DC blocks.

@dilijev

dilijev commented Jul 15, 2016

I think the point was to perform well on JPEG, the most common image format. You want to compress lossy files just because there are a lot of them, while not making the losses worse.

It is not misleading to call it a lossless compression algorithm, because decompress(compress(data)) == data.
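A minimal sketch of that round-trip check, again assuming the `lepton <input> <output>` CLI form and a hypothetical sample file:

```python
#!/usr/bin/env python3
"""Round-trip sketch: a JPEG compressed by lepton and then decompressed
should be bit-identical to the original. The CLI form and filenames are
assumptions, not taken from this issue."""
import subprocess

original = "photo.jpg"            # hypothetical sample file
compressed = "photo.lep"
restored = "photo.roundtrip.jpg"

subprocess.run(["lepton", original, compressed], check=True)  # compress
subprocess.run(["lepton", compressed, restored], check=True)  # decompress

with open(original, "rb") as a, open(restored, "rb") as b:
    assert a.read() == b.read(), "round trip changed the bytes"
print("decompress(compress(data)) == data holds for this file")
```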

@bartvanandel

Since this algorithm relies on the predictability of neighboring blocks, it is actually to be expected that it performs better on more heavily compressed images. Sharp details make predicting the next block harder, and higher compression tends to remove those details.

If you set the quality as low as 20, you will start to notice increasingly large areas where most of the detail is gone, leaving an image with lots of solid-color 8x8 blocks. Because of quantization, that color will often look quite different from the original average block color, but it makes the blocks even more predictable.

Look, I'm not the expert here, this is just what I gathered from reading a blog post about it.
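A toy illustration of that quantization effect (my own sketch, not from the issue): using the standard JPEG luminance table with the common libjpeg quality scaling and a synthetic noisy 8x8 block, the lower the quality, the more DCT coefficients are zeroed out, leaving flatter, more predictable blocks.

```python
#!/usr/bin/env python3
"""Count how many DCT coefficients of one 8x8 block survive quantization
at different JPEG qualities. The block is synthetic noise; the table and
scaling follow the JPEG spec and the common libjpeg convention."""
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (Annex K of the JPEG spec)
BASE_Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=np.float64)

def scaled_table(quality: int) -> np.ndarray:
    """libjpeg-style scaling of the base table for a given quality."""
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    return np.clip(np.floor((BASE_Q * scale + 50) / 100), 1, 255)

rng = np.random.default_rng(0)
block = rng.integers(100, 156, size=(8, 8)).astype(np.float64) - 128  # noisy block
coeffs = dctn(block, norm="ortho")

for quality in (100, 80, 20):
    quantized = np.round(coeffs / scaled_table(quality))
    nonzero = int(np.count_nonzero(quantized))
    print(f"quality {quality}: {nonzero}/64 nonzero coefficients after quantization")
```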

@danielrh
Contributor

With very high 'quality' settings, you're essentially storing a lot of extra sensor noise that your camera made. This sensor noise is largely random in nature and can be difficult to compress well.
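As a quick demonstration of why that matters, a generic compressor barely shrinks random data while structured data compresses well (zlib is used here purely as a stand-in, not Lepton's actual coder):

```python
#!/usr/bin/env python3
"""Compress structured vs. random bytes with zlib to show that noise-like
data is nearly incompressible. Illustrative only; not lepton's coder."""
import os
import zlib

smooth = bytes(range(256)) * 4096   # highly structured, predictable
noisy = os.urandom(len(smooth))     # stand-in for sensor noise

for name, data in (("smooth", smooth), ("noisy", noisy)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of original size")
```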
