
Deep equilibrium models for video snapshot compressive imaging (AAAI'2023)

This repository provides the code for the paper:

Deep equilibrium models for video snapshot compressive imaging

Yaping Zhao, Siming Zheng, Xin Yuan

arXiv link: https://arxiv.org/abs/2201.06931

Highlights

Results

[Figures: compressed measurements (top row of each pair) and our reconstruction results (bottom row of each pair) for several test scenes.]

Requirements

To set up the prerequisites, run:

conda env create -f environment.yml
conda activate deq
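
After activating the environment, you can optionally confirm that PyTorch and the GPU are visible. This is a minimal check, assuming environment.yml installs PyTorch (the code is a PyTorch implementation, and the training example further below passes --gpu_ids 0, which suggests a CUDA-capable GPU is expected):

# Optional sanity check for the "deq" environment (assumes PyTorch is installed by environment.yml).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())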

Pre-trained models

Pre-trained models are provided in the models folder, and testing datasets in the data/test_gray/ folder, so you can get started quickly without any additional downloads.
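
If you want to peek at what a bundled checkpoint contains before running anything, here is a minimal sketch, assuming the .ckpt files are ordinary PyTorch checkpoints:

# Minimal sketch: inspect a bundled checkpoint. Assumes ./models/ffdnet.ckpt is a standard
# PyTorch checkpoint; on recent PyTorch versions you may need torch.load(..., weights_only=False)
# if the file stores pickled objects.
import torch

ckpt = torch.load("./models/ffdnet.ckpt", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. model weights and/or training state, depending on how it was saved
else:
    print(type(ckpt))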

Getting started

To reproduce the main results from our paper, simply run:

sh test_ffdnet.sh

or

python ./video_sci_proxgrad.py \
--savepath ./save/test_ffdnet/ \
--testpath ../data/test_gray/ \
--loadpath ./models/ffdnet.ckpt \
--denoiser ffdnet \
--and_maxiters 180 \
--inference True
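
For orientation, video_sci_proxgrad.py performs a proximal-gradient (GAP-style) reconstruction that plugs in a learned denoiser such as FFDNet, and the deep equilibrium formulation solves for a fixed point of that update rather than unrolling a fixed number of iterations. The following is a conceptual sketch of a single iteration; it is not the repository's code, and all names, shapes, and the step size are illustrative:

# Conceptual sketch of one plug-and-play proximal-gradient (GAP-style) step for video SCI.
# Not the repository's implementation: `masks`, `denoise`, and all shapes are illustrative.
import torch

def proxgrad_step(x, y, masks, denoise, step=1.0):
    """x: current video estimate (T, H, W); y: snapshot measurement (H, W),
    modeled as y = sum_t masks[t] * x[t]; masks: modulation masks (T, H, W);
    denoise: a learned denoiser such as FFDNet applied to the estimate."""
    residual = y - (masks * x).sum(dim=0)  # data-fidelity residual
    x = x + step * masks * residual        # gradient step on the data term
    return denoise(x)                      # prior (proximal) step via the denoiser

The --and_maxiters 180 flag above presumably caps the number of such fixed-point iterations at test time.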

Testing other models

Pre-trained DE-GAP-CNN and DE-GAP-RSN-CNN models are also provided in the models folder. To test them, run:

sh test_cnn.sh

or

sh test_rsn_cnn.sh

Training dataset

The training dataset is available at OneDrive and Baidu Netdisk (password: df8a). Download and unzip it into the data/DAVIS/matlab/ folder, so that the file structure matches the listing below (a quick sanity-check sketch follows the listing):

  • DAVIS/
    • matlab/
      • gt/
      • measurement/
      • data_generation.m
      • mask.mat
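
After unzipping, you can verify the layout and peek into mask.mat with SciPy (assuming SciPy is available in the environment). The variable names inside the .mat file are not documented here, so the sketch simply lists whatever it finds:

# Minimal sketch: verify the unpacked dataset layout and list the contents of mask.mat.
# The variable names inside mask.mat are not documented here; we only print what exists.
import os
import scipy.io as sio

root = "./data/DAVIS/matlab/"
print(sorted(os.listdir(root)))  # expect gt/, measurement/, data_generation.m, mask.mat

mat = sio.loadmat(os.path.join(root, "mask.mat"))
for name, value in mat.items():
    if not name.startswith("__"):  # skip MATLAB header entries such as __header__
        print(name, getattr(value, "shape", None))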

Training new models

To train DE-GAP-FFDnet from scratch, simply run:

sh train_ffdnet.sh

or

python ./video_sci_proxgrad.py \
--savepath ./save/train_ffdnet/ \
--trainpath ../data/DAVIS/matlab/ \
--testpath ../data/test_gray/ \
--denoiser ffdnet

To try different settings, see the arguments defined in video_sci_proxgrad.py; you can use or extend them as needed.

For example, you may run:

python ./video_sci_proxgrad.py \
--batch_size 1 \
--lr 0.0001 \
--lr_gamma 0.1 \
--sched_step 10 \
--print_every_n_steps 100 \
--save_every_n_steps 1000 \
--savepath ./save/train_ffdnet/ \
--trainpath ./data/DAVIS/matlab/ \
--testpath ./data/test_gray/ \
--loadpath ./save/ffdnet.ckpt \
--denoiser ffdnet \
--gpu_ids 0
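
The --lr, --lr_gamma, and --sched_step flags suggest a step-decay learning-rate schedule. If that reading is right, they would correspond roughly to a standard PyTorch StepLR as sketched below; this is an assumption about the script's internals, not code taken from it:

# Hedged sketch of how --lr 0.0001, --lr_gamma 0.1 and --sched_step 10 would map onto a
# standard PyTorch step-decay schedule. This is an assumption about video_sci_proxgrad.py.
import torch

model = torch.nn.Conv2d(1, 1, 3)                           # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # --lr 0.0001 (optimizer choice assumed)
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=10, gamma=0.1)                    # --sched_step 10, --lr_gamma 0.1

# Each scheduler.step() call at the end of a period multiplies the learning rate by 0.1.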

Citation

Cite our paper if you find it interesting!

@inproceedings{zhao2023deep,
  title={Deep Equilibrium Models for Snapshot Compressive Imaging},
  author={Zhao, Yaping and Zheng, Siming and Yuan, Xin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={37},
  number={3},
  pages={3642--3650},
  year={2023}
}

@article{zhao2022deep,
  title={Deep equilibrium models for video snapshot compressive imaging},
  author={Zhao, Yaping and Zheng, Siming and Yuan, Xin},
  journal={arXiv preprint arXiv:2201.06931},
  year={2022}
}

@article{zhao2022mathematical,
  title={Mathematical Cookbook for Snapshot Compressive Imaging},
  author={Zhao, Yaping},
  journal={arXiv preprint arXiv:2202.07437},
  year={2022}
}

About

Python (PyTorch) implementation of the AAAI paper "Deep Equilibrium Models for Video Snapshot Compressive Imaging"
