Renan A. Rojas-Gomez*,
Teck-Yian Lim*,
Alexander G. Schwing,
Minh N. Do,
Raymond A. Yeh¹
(*Equal Contribution)
University of Illinois at Urbana-Champaign, ¹Purdue University
This is the official implementation of "Learnable Polyphase Sampling for Shift Invariant and Equivariant Convolutional Networks," accepted at NeurIPS 2022. If you use this code or find it helpful, please consider citing:
@inproceedings{rojas-neurips2022-learnable,
  title     = {Learnable Polyphase Sampling for Shift Invariant and Equivariant Convolutional Networks},
  author    = {Rojas-Gomez$^*$, Renan A and Lim$^*$, Teck Yian and Schwing, Alexander G and Do, Minh N and Yeh, Raymond A},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  year      = {2022},
  note      = {($^*$ Equal Contribution)},
}
All our experiments were executed using:
- python v3.8.10
- pytorch-lightning v1.4.0
- torchvision v0.10.0
- cudatoolkit v10.2.89
- opencv-python v4.5.4.60
For a full list of requirements, please refer to `learn_poly_sampling/env_reqs.yml`. To install the dependencies, please first install Miniconda and execute:
cd learn_poly_sampling/
conda env create -f env_reqs.yml
To check the installation, execute the following:
make
For notebook demonstrations of our proposed LPS (LPD and LPU) layers, please refer to the demo directory.
To run the notebook, please execute:
conda install -c conda-forge notebook
conda install -c conda-forge nb_conda_kernels
conda install -c conda-forge matplotlib
jupyter-notebook demo
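The notebooks walk through the layers interactively. As a quick, self-contained illustration of the idea behind polyphase downsampling, here is a minimal PyTorch sketch; the class name `SimpleLPD` and its internals are hypothetical, not this repo's API. It splits a feature map into its four stride-2 polyphase components, scores each with a small learnable network, and keeps the highest-scoring one. The actual LPD layer relaxes the hard selection during training so it stays differentiable; this sketch only shows inference-time behavior.

```python
# Minimal, hypothetical sketch of learnable polyphase downsampling (LPD).
# Not this repo's implementation; for intuition only. Assumes even H and W.
import torch
import torch.nn as nn

class SimpleLPD(nn.Module):
    """Downsample by 2 by keeping one of the four polyphase components,
    chosen by a small learnable scoring network."""
    def __init__(self, channels: int):
        super().__init__()
        # Tiny scorer: a 1x1 conv, later pooled to one logit per component.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Polyphase decomposition for stride 2: four (N, C, H/2, W/2) components.
        comps = [x[..., i::2, j::2] for i in (0, 1) for j in (0, 1)]
        stacked = torch.stack(comps, dim=1)  # (N, 4, C, H/2, W/2)
        # One scalar logit per component.
        logits = torch.stack(
            [self.score(c).mean(dim=(1, 2, 3)) for c in comps], dim=1)  # (N, 4)
        # Hard selection (inference). Training would use a soft relaxation
        # of this argmax so that gradients reach the scoring network.
        idx = logits.argmax(dim=1)  # (N,)
        return stacked[torch.arange(x.shape[0]), idx]  # (N, C, H/2, W/2)

x = torch.randn(2, 8, 32, 32)
lpd = SimpleLPD(channels=8)
print(lpd(x).shape)  # torch.Size([2, 8, 16, 16])
```

Why this helps with shift invariance: shifting the input by one pixel permutes the polyphase components, so a selection rule that depends only on the components' contents picks out the same component before and after the shift, unlike fixed strided downsampling.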
- Download the ILSVRC2012 dataset from its official repository, uncompress it into the datasets folder (e.g. `/learn_poly_sampling/datasets/ILSVRC2012`), and split it into train and val partitions using this script.
- Classification accuracy or shift consistency can be computed by setting the `--eval_mode` flag to either `class_accuracy` or `shift_consistency`, respectively (see the example below).
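For example, an evaluation run might be invoked as sketched below. The script name is a placeholder; only `--eval_mode` and its two values come from this README, so please consult the actual entry points in `learn_poly_sampling/scripts`:

```bash
# Hypothetical invocation; eval.py is a placeholder script name.
python eval.py --eval_mode shift_consistency
```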
To reproduce our results in Tables 2 and 3, run the scripts included in `learn_poly_sampling/scripts` with our pre-trained models. Please refer to the link above to download all our pre-trained classification models. Note that our evaluation scripts assume the checkpoints are stored at `learn_poly_sampling/checkpoints/classification`.
We thank the authors of antialiased-cnns, Adaptive-anti-Aliasing, and truly_shift_invariant_cnns for open-sourcing their code, which we consulted and used during the development of this project.