I'm not going to explain the why on this page, only the how:
Stage 1: Get essential info using PNGOUT; specifically, the colorspace info and how filters are applied in the original image.
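PNGOptim gets this info from PNGOUT, but to make "colorspace info" concrete, here is a minimal Python sketch (not part of PNGOptim) that reads the same facts straight from a PNG's IHDR chunk. Note that the per-scanline filter bytes live inside the compressed IDAT data, so a tool has to actually decompress the image to see them:

```python
import struct

# PNG color types as defined in the IHDR chunk of the PNG specification
COLOR_TYPES = {0: "grayscale", 2: "truecolor", 3: "palette",
               4: "grayscale+alpha", 6: "truecolor+alpha"}

def read_ihdr(data: bytes) -> dict:
    """Parse the IHDR chunk of a PNG byte stream and return colorspace info."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    # The first chunk must be IHDR: 4-byte length, 4-byte type, 13-byte payload
    length, ctype = struct.unpack(">I4s", data[8:16])
    if ctype != b"IHDR" or length != 13:
        raise ValueError("missing IHDR chunk")
    width, height, bit_depth, color_type, _, _, interlace = \
        struct.unpack(">IIBBBBB", data[16:29])
    return {"width": width, "height": height, "bit_depth": bit_depth,
            "colorspace": COLOR_TYPES[color_type], "interlaced": bool(interlace)}
```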
Stage 2: Run zopflipng with `-q` (faster compression) and `--filters=01234me[p]b` (`p` is omitted if the original image already uses filters 0-4) to find good filters. Every filter strategy that produces a small file (size within a range of the smallest result) is marked and will be used in the next stage (without `-q`). If colorspace reduction (grayscale / palette) is allowed, it is done by zopflipng.
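The "within a range of the smallest" selection could be sketched as follows; the function name and the 1% tolerance are assumptions for illustration, not PNGOptim's actual values:

```python
def mark_filter_candidates(sizes: dict, tolerance: float = 0.01) -> list:
    """Given a mapping of filter strategy -> quick-pass file size (bytes),
    return the strategies whose size is within `tolerance` (relative) of
    the smallest result. These survive into the slow pass."""
    best = min(sizes.values())
    return sorted(f for f, s in sizes.items() if s <= best * (1 + tolerance))
```

For example, quick-pass sizes of `{"0": 1000, "m": 905, "e": 900, "b": 1200}` would keep only `e` and `m` for the heavy stage.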
Stage 3: Run `zopflipng --iterations=num` to do heavy compression with the chosen filters. The `--iterations` argument can be thought of as how many times you want zopflipng to retry the deflate compression; you can specify this number when you run PNGOptim. After this stage, a small image file with an optimal filter is produced. (Theoretically there is no single optimal filter: filtering only exists to make the deflation better, so with a different deflater, or even a different initial table/argument for the same deflater, the optimal filter might vary. In practice it doesn't differ that much.)
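As a sketch, the stage-3 invocation might be assembled like this. The flags follow zopflipng's documented options (`-y` overwrites the output without asking); the helper itself is hypothetical, not PNGOptim's code:

```python
def zopflipng_cmd(src: str, dst: str, iterations: int, filters: list) -> list:
    """Assemble the stage-3 zopflipng command line: heavy compression,
    restricted to the filter strategies that survived stage 2."""
    return ["zopflipng", "-y",
            f"--iterations={iterations}",
            f"--filters={''.join(filters)}",
            src, dst]
```

The resulting argv list could then be handed to `subprocess.run`.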
Stage 4: With the file produced in stage 3 as input, use `pngout -f6` to do deflate compression while reusing the stage-3 filters line by line. After that, use `-r` (random initial table) to run several trials; you can specify how many times pngout should try when you run PNGOptim. After this stage, the smallest image file produced by pngout is selected.
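The trial loop boils down to "run `pngout -r` n times and keep the best result". A minimal sketch, with the actual pngout invocation abstracted into a callable (so it can be stubbed here) rather than taken from PNGOptim's real code:

```python
def best_of_trials(run_trial, n: int) -> tuple:
    """Run `run_trial(i)` n times (e.g. one `pngout -r` pass that returns
    the resulting file size) and return (best_trial_index, smallest_size)."""
    # Pair each size with its trial index; min() picks the smallest size,
    # breaking ties in favor of the earliest trial.
    results = [(run_trial(i), i) for i in range(n)]
    size, idx = min(results)
    return idx, size
```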
Stage 5: Finally, compare zopflipng's output with pngout's output and keep the smallest.
TODO: Sometimes pngout outperforms zopflipng in deflation on almost every try, and sometimes the opposite, so there should be room for optimization in stages 3 and 4. However, this only really becomes a problem when the user lets PNGOptim try many times, which is discouraged in most scenarios.