Online Learned Continual Compression with Adaptive Quantization Modules (ICML 2020)

Stacking quantization blocks for efficient lifelong online compression.
Code for reproducing all results in our paper, which can be found here.
A quick demo is available on Google Colab here.
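
At a high level, AQM stacks small autoencoder blocks, each with a quantized bottleneck, so that deeper blocks further compress the codes produced by the block below. The snippet below is a minimal, hypothetical sketch of that stacking pattern with the quantization step omitted; the names Block and Stack are illustrative only and do not match the QLayer / QStack API in Common/modular.py.

import torch
import torch.nn as nn

class Block(nn.Module):
    # One autoencoder block: downsampling encoder, upsampling decoder.
    # A quantization step (e.g. vector quantization) would sit between them.
    def __init__(self, in_ch, hidden_ch):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, hidden_ch, 4, stride=2, padding=1)
        self.dec = nn.ConvTranspose2d(hidden_ch, in_ch, 4, stride=2, padding=1)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

class Stack(nn.Module):
    # Deeper blocks compress the codes produced by the block below.
    def __init__(self, channels=(3, 32, 64)):
        super().__init__()
        self.blocks = nn.ModuleList(
            Block(channels[i], channels[i + 1]) for i in range(len(channels) - 1)
        )

    def forward(self, x):
        inp, recons = x, []
        for block in self.blocks:
            recon, inp = block(inp)   # feed each block's codes to the next one
            recons.append(recon)
        return recons

x = torch.randn(4, 3, 32, 32)
print([tuple(r.shape) for r in Stack()(x)])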

Requirements

  • Python 3.7
  • PyTorch 1.4.0
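
The following is a quick sanity check for the versions above (a sketch; see reproduce.txt for the exact setup used for the paper's experiments):

import sys
import warnings

import torch

# Warn (rather than fail) if the environment differs from the one used in the paper.
if sys.version_info[:2] != (3, 7):
    warnings.warn("the paper's experiments were run with Python 3.7")
if not torch.__version__.startswith("1.4"):
    warnings.warn("the paper's experiments were run with PyTorch 1.4.0")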

Structure

├── Common
│   ├── modular.py          # Module (QLayer) and stacked modules (QStack); includes most key ops, such as the adaptive buffer
│   ├── quantize.py         # Discretization ops (Gumbel-Softmax, vector/tensor quantization, argmax quantization); see the sketch below
│   ├── model.py            # Encoder, decoder and classifier blocks
├── config                  # .yaml files specifying the AQM architectures and hyperparameters used in the paper
├── Lidar
│   ├── ....                # Files to run the LiDAR experiments
├── Utils
│   ├── args.py             # Command-line arguments
│   ├── buffer.py           # Basic buffer implementation; handles raw and compressed representations
│   ├── data.py             # CL datasets and dataloaders
│   ├── utils.py            # Logging, saving/loading of models and args, point cloud processing

├── gen_main.py             # Script for the offline classification (e.g. ImageNet) experiments
├── eval.py                 # Evaluation loops for drift, test accuracy / MSE, and LiDAR
├── cls_main.py             # Script for the online classification (e.g. CIFAR) experiments

├── reproduce.txt           # All commands and information needed to reproduce the results in the paper
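
For reference, vector quantization in the VQ-VAE style (which AQM builds on) snaps each encoder vector to its nearest codebook entry, passes gradients straight through the discretization, and adds codebook / commitment losses. The sketch below is a generic illustration of that op, not the repository's implementation, which also covers Gumbel-Softmax, tensor quantization and argmax variants:

import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=256, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z):
        # z: (B, C, H, W) with C == codeword dimension
        z_flat = z.permute(0, 2, 3, 1).reshape(-1, z.size(1))
        idx = torch.cdist(z_flat, self.codebook.weight).argmin(dim=1)   # nearest codeword per vector
        z_q = self.codebook(idx).view(z.size(0), z.size(2), z.size(3), z.size(1)).permute(0, 3, 1, 2)
        # codebook loss pulls codewords toward encoder outputs; commitment loss does the reverse
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()                                     # straight-through gradient
        return z_q, idx.view(z.size(0), z.size(2), z.size(3)), loss

vq = VectorQuantizer()
z_q, codes, vq_loss = vq(torch.randn(2, 64, 8, 8))
print(z_q.shape, codes.shape, vq_loss.item())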

Acknowledgements

We would like to thank the authors of the following repositories (from which we borrowed code) for making their code public.

  • Gradient Episodic Memory
  • VQ-VAE
  • VQ-VAE-2
  • MIR

Contact

For any questions / comments / concerns, feel free to open an issue on GitHub, or send me an email at
lucas.page-caccia@mail.mcgill.ca.

We strongly believe in fully reproducible research. To that end, if you find any discrepancy between our code and the paper, please let us know, and we will make sure to address it.

Happy streaming compression :)

Citation

If you find this code useful, please cite us in your work.

@article{caccia2019online,
  title={Online Learned Continual Compression with Adaptive Quantization Modules},
  author={Caccia, Lucas and Belilovsky, Eugene and Caccia, Massimo and Pineau, Joelle},
  journal={Proceedings of the 37th International Conference on Machine Learning},
  year={2020}
}
