PyTorch implementation of variational entropy-constrained vector quantization (VECVQ) as described in *Nonlinear Transform Coding*. The table below shows how the VECVQ model is trained in three different bitrate regimes for two bivariate source distributions.
| | Low-rate | Medium-rate | High-rate |
|---|---|---|---|
| Normal | | | |
| Banana | | | |
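As background, the core idea of entropy-constrained VQ is to pick the codeword that minimizes distortion plus a Lagrangian rate penalty, where the rate of a codeword is its negative log probability; the variational model trains a relaxed version of this rule. Below is a minimal NumPy sketch of the hard (non-variational) encoding rule; the function name, toy codebook, and probabilities are illustrative, not part of this repo.

```python
import numpy as np

def ecvq_encode(x, codebook, probs, lam):
    """Entropy-constrained VQ: choose the codeword index minimizing
    distortion + lam * rate, with rate = -log2(codeword probability)."""
    dists = np.sum((codebook - x) ** 2, axis=1)  # squared-error distortion
    rates = -np.log2(probs)                      # ideal code length in bits
    return int(np.argmin(dists + lam * rates))

# toy 2-D codebook with unequal codeword probabilities
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
probs = np.array([0.7, 0.2, 0.1])

x = np.array([0.9, 0.9])
print(ecvq_encode(x, codebook, probs, lam=0.0))   # pure distortion -> 1
print(ecvq_encode(x, codebook, probs, lam=10.0))  # rate dominates  -> 0
```

Sweeping `lam` traces out the rate-distortion trade-off, which is what produces the low/medium/high-rate regimes shown above.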
In a virtual environment, follow the steps below (verified on Ubuntu):

```bash
git clone https://github.com/ali-zafari/VECVQ VECVQ
cd VECVQ
python3 -m venv venv
source venv/bin/activate
pip install -U pip
pip install -r requirements.txt
```
All configuration for the dataloader, training strategy, etc. should be set in `config.py`, followed simply by the command:

```bash
python train.py
```

Model checkpoints and quantization plots will be saved under a directory named `ckpt`.
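For a sense of what such a configuration typically holds, here is a hypothetical sketch; the field names below are illustrative and do not reflect the actual contents of this repo's `config.py`.

```python
from dataclasses import dataclass

# Illustrative only: these fields are hypothetical, not the repo's real config.
@dataclass
class Config:
    source: str = "banana"     # bivariate source: "normal" or "banana"
    batch_size: int = 1024     # dataloader batch size
    codebook_size: int = 64    # number of VQ codewords
    lmbda: float = 0.01        # rate-distortion weight (sets bitrate regime)
    max_steps: int = 100_000   # training steps
    ckpt_dir: str = "ckpt"     # output directory for checkpoints and plots

cfg = Config(lmbda=0.1)        # e.g. push toward a higher-rate regime
print(cfg.lmbda)
```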
```
├─ models
│  ├─ compression_model.py   base compression model
│  ├─ vecvq.py               variational entropy-constrained VQ
│
├─ banana.py                 banana probability distribution
├─ source.py                 source data (LightningDataModule)
├─ config.py                 configuration file
├─ train.py                  main script to train the VECVQ model
```
- TensorFlow Compression: TensorFlow implementation of Nonlinear Transform Coding
- Lossy Compression for Lossless Prediction: Banana distribution borrowed from Yann Dubois
- Lightning: PyTorch framework to avoid boilerplate code
```bibtex
@article{balle2020nonlinear,
  title={Nonlinear transform coding},
  author={Ball{\'e}, Johannes and Chou, Philip A and Minnen, David and Singh, Saurabh and Johnston, Nick and Agustsson, Eirikur and Hwang, Sung Jin and Toderici, George},
  journal={IEEE Journal of Selected Topics in Signal Processing},
  year={2020},
  publisher={IEEE}
}

@article{dubois2021lossyless,
  title={Lossy compression for lossless prediction},
  author={Dubois, Yann and Bloem-Reddy, Benjamin and Ullrich, Karen and Maddison, Chris J},
  journal={Advances in Neural Information Processing Systems},
  year={2021}
}
```