
Online Learned Continual Compression with Adaptive Quantization Modules (ICML 2020)

Stacking quantization blocks for efficient lifelong online compression.
Code for reproducing all results in our paper, which can be found here.
A quick demo is available on Google Colab here.
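At the heart of each quantization module is a vector-quantization step: continuous codes are snapped to their nearest entries in a learned codebook, so only small integer indices need to be stored in the replay buffer. As a rough illustration only (a toy NumPy sketch, not the repo's PyTorch implementation; the function name and codebook sizes are made up):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each D-dim vector in z to its nearest codebook entry.

    z:        (N, D) array of continuous codes
    codebook: (K, D) array of embeddings
    Returns the quantized vectors and the chosen indices.
    """
    # Squared L2 distance from every vector to every codebook entry
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)        # (N,) discrete codes to store
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))   # K=16 entries of dimension 4
z = rng.normal(size=(8, 4))
z_q, idx = vector_quantize(z, codebook)
# Only the 8 integer indices need to be kept in the buffer;
# z_q can be reconstructed from them on demand.
```

The actual training code (see `Common/quantize.py`) additionally handles gradient flow through the discrete step, e.g. via straight-through estimation or Gumbel-Softmax.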

Requirements

  • Python 3.7
  • PyTorch 1.4.0

Structure

├── Common
│   ├── modular.py          # Module (QLayer) and stacked modules (QStack); includes most key ops, such as the adaptive buffer
│   ├── quantize.py         # Discretization ops (Gumbel-Softmax, vector/tensor quantization and argmax quantization)
│   ├── model.py            # Encoder, decoder and classifier blocks
├── config                  # .yaml files specifying the AQM architectures and hyperparameters used in the paper
├── Lidar
│   ├── ....                # Files to run the LiDAR experiments
├── Utils
│   ├── args.py             # Command-line arguments
│   ├── buffer.py           # Basic buffer implementation; handles raw and compressed representations
│   ├── data.py             # CL datasets and dataloaders
│   ├── utils.py            # Logging, saving/loading of models and args, point cloud processing
│
├── gen_main.py             # Runs the offline classification (e.g. ImageNet) experiments
├── eval.py                 # Evaluation loops for drift, test accuracy / MSE, and LiDAR
├── cls_main.py             # Runs the online classification (e.g. CIFAR) experiments
│
├── reproduce.txt           # All commands and information needed to reproduce the results in the paper
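The "stacked" modules in `Common/modular.py` chain quantization blocks so that later levels refine what earlier levels missed. One simple way to picture this is residual quantization (a hypothetical NumPy sketch; the actual QStack logic differs in detail, and the codebook shapes here are invented):

```python
import numpy as np

def vq(z, codebook):
    # Nearest-codebook-entry assignment under squared L2 distance
    idx = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
    return codebook[idx], idx

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 8))               # batch of continuous codes
cb1 = rng.normal(size=(64, 8))             # coarse first-level codebook
# Finer second-level codebook; a zero entry guarantees the second
# level can never make the reconstruction worse.
cb2 = np.vstack([np.zeros((1, 8)), 0.1 * rng.normal(size=(63, 8))])

q1, i1 = vq(x, cb1)                        # level 1: quantize the input
q2, i2 = vq(x - q1, cb2)                   # level 2: quantize the residual

err_one = float(np.mean((x - q1) ** 2))
err_two = float(np.mean((x - (q1 + q2)) ** 2))
# Storing (i1, i2) costs two small integers per vector, and the
# two-level reconstruction q1 + q2 is at least as close to x as q1 alone.
```

This is only meant to convey the idea of stacking; in the paper the levels are full encoder/decoder blocks with learned codebooks, and the adaptive buffer decides at which level each sample is stored.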

Acknowledgements

We would like to thank the authors of the following repositories, from which we borrowed code, for making it public:
Gradient Episodic Memory
VQ-VAE
VQ-VAE-2
MIR

Contact

For any questions, comments, or concerns, feel free to open an issue on GitHub, or send me an email at
lucas.page-caccia@mail.mcgill.ca.

We strongly believe in fully reproducible research. To that end, if you find any discrepancy between our code and the paper, please let us know, and we will make sure to address it.

Happy streaming compression :)

Citation

If you find this code useful, please cite us in your work.

@inproceedings{caccia2019online,
  title={Online Learned Continual Compression with Adaptive Quantization Modules},
  author={Caccia, Lucas and Belilovsky, Eugene and Caccia, Massimo and Pineau, Joelle},
  booktitle={Proceedings of the 37th International Conference on Machine Learning},
  year={2020}
}
