Model cheng2020-attn cannot achieve high bit-rates and lacks GMM #179
-
I noticed that the cheng2020-attn model only reaches about 0.8 bpp, both in the CompressAI implementation and in the original paper. Could N=192 be limiting its high-bitrate performance? The CompressAI implementation of cheng2020-attn has only 6 quality levels. Was this model trained only with the first 6 lambda values listed at https://interdigitalinc.github.io/CompressAI/zoo.html? Can it reach higher rates with a larger lambda? Also, CompressAI does not support GMM (Gaussian Mixture Model); does that really affect performance as much as the original paper claims? Thanks for any discussion and answers!
Replies: 1 comment 1 reply
-
We used the 6 lambda operating points defined in the original paper, but yes, you can train models with higher lambdas to cover a wider range of bitrates. You are also right that increasing the number of channels may help at high bitrates. GMM is expected to give a small gain, although not much in our experiments. You can compare our curves and result JSON files with the curves from the paper, which include all the proposed tools; ours mostly report the performance of the overall architecture (residual blocks).
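To see why training with a larger lambda pushes the model toward higher bitrates, here is a minimal sketch (plain Python, not CompressAI code) of the rate-distortion objective `lambda * 255**2 * MSE + bpp` used for MSE-optimized models. The MSE and bpp values below are purely illustrative assumptions, not measured results.

```python
def rd_loss(mse: float, bpp: float, lmbda: float) -> float:
    """Rate-distortion loss for MSE-optimized models: a larger lambda
    weights distortion more heavily, so the optimal operating point
    shifts toward lower MSE at the cost of a higher bitrate."""
    return lmbda * 255 ** 2 * mse + bpp

# Two hypothetical operating points: a coarse, cheap reconstruction
# and a sharper, more expensive one (numbers are illustrative only).
coarse = dict(mse=1e-4, bpp=0.4)
sharp = dict(mse=3e-5, bpp=0.9)

# At a small lambda the low-rate point wins; at a large lambda the
# high-rate point becomes preferable.
print(rd_loss(**coarse, lmbda=0.01) < rd_loss(**sharp, lmbda=0.01))  # coarse wins
print(rd_loss(**coarse, lmbda=0.2) > rd_loss(**sharp, lmbda=0.2))    # sharp wins
```

This is why retraining with lambdas beyond the 6 published operating points can extend the curve to higher bitrates, provided the network has enough capacity (channels) to use them.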