YingqianWang/DistgSSR

DistgSSR: Disentangling Mechanism for Light Field Spatial Super-Resolution


This is the PyTorch implementation of the spatial SR method in our paper "Disentangling Light Fields for Super-Resolution and Disparity Estimation". Please refer to our paper and project page for details.

News and Updates:

  • 2022-04-03: Checkpoint DistgSSR_4xSR_9x9.pth.tar is available.
  • 2022-03-10: Checkpoint DistgSSR_4xSR_8x8.pth.tar is available.
  • 2022-02-22: LFdivide and LFintegrate have been optimized, and our code has been modified to enable inference on a batch of patches.
  • 2022-02-22: Checkpoints DistgSSR_4xSR_6x6.pth.tar and DistgSSR_4xSR_7x7.pth.tar are available.
  • 2022-02-22: Our DistgSSR has been added into the repository BasicLFSR.
  • 2022-02-16: Our paper is accepted by IEEE TPAMI.

Preparation:

1. Requirement:

  • PyTorch 1.3.0 and torchvision 0.4.1. The code is tested with Python 3.6 and CUDA 9.0.
  • Matlab for training/test data generation and performance evaluation.

2. Datasets:

  • We used the EPFL, HCInew, HCIold, INRIA and STFgantry datasets for training and testing. Please first download our dataset via Baidu Drive (key:7nzy) or OneDrive, and place the five datasets in the folder ./Datasets/.

3. Generating training/test data:

  • Run Generate_Data_for_Train.m to generate training data. The generated data will be saved in ./Data/train_kxSR_AxA/.
  • Run Generate_Data_for_Test.m to generate test data. The generated data will be saved in ./Data/test_kxSR_AxA/.

4. Download our pretrained models:

We provide models for each angular resolution (2×2 to 9×9) at both 2× and 4× SR. Download our models through the following links:

Upscaling Factor | Angular Resolution | Channel Depth | Download Link
-----------------|--------------------|---------------|------------------------------
2×SR             | 5×5                | 32            | DistgSSR_2xSR_5x5_C32.pth.tar
2×SR             | 2×2                | 64            | DistgSSR_2xSR_2x2.pth.tar
2×SR             | 3×3                | 64            | DistgSSR_2xSR_3x3.pth.tar
2×SR             | 4×4                | 64            | DistgSSR_2xSR_4x4.pth.tar
2×SR             | 5×5                | 64            | DistgSSR_2xSR_5x5.pth.tar
2×SR             | 6×6                | 64            | DistgSSR_2xSR_6x6.pth.tar
2×SR             | 7×7                | 64            | DistgSSR_2xSR_7x7.pth.tar
2×SR             | 8×8                | 64            | DistgSSR_2xSR_8x8.pth.tar
2×SR             | 9×9                | 64            | DistgSSR_2xSR_9x9.pth.tar
4×SR             | 5×5                | 32            | DistgSSR_4xSR_5x5_C32.pth.tar
4×SR             | 2×2                | 64            | DistgSSR_4xSR_2x2.pth.tar
4×SR             | 3×3                | 64            | DistgSSR_4xSR_3x3.pth.tar
4×SR             | 4×4                | 64            | DistgSSR_4xSR_4x4.pth.tar
4×SR             | 5×5                | 64            | DistgSSR_4xSR_5x5.pth.tar
4×SR             | 6×6                | 64            | DistgSSR_4xSR_6x6.pth.tar
4×SR             | 7×7                | 64            | DistgSSR_4xSR_7x7.pth.tar
4×SR             | 8×8                | 64            | DistgSSR_4xSR_8x8.pth.tar
4×SR             | 9×9                | 64            | DistgSSR_4xSR_9x9.pth.tar
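The checkpoint names in the table above follow a regular pattern. The small helper below (hypothetical, not part of the released code) builds the expected filename from the upscaling factor, angular resolution, and channel depth:

```python
def checkpoint_name(scale, ang_res, channels=64):
    """Build a checkpoint filename matching the table above.

    scale    -- upscaling factor (2 or 4)
    ang_res  -- angular resolution A (the model handles an AxA view grid)
    channels -- channel depth; only the 32-channel variants carry a suffix
    """
    suffix = "" if channels == 64 else f"_C{channels}"
    return f"DistgSSR_{scale}xSR_{ang_res}x{ang_res}{suffix}.pth.tar"

print(checkpoint_name(4, 9))      # DistgSSR_4xSR_9x9.pth.tar
print(checkpoint_name(2, 5, 32))  # DistgSSR_2xSR_5x5_C32.pth.tar
```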

Train:

  • Set the hyper-parameters in parse_args() if needed. We have provided our default settings in the released code.
  • Run train.py to perform network training.
  • Checkpoints will be saved to ./log/.
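The hyper-parameters are collected in parse_args(). The sketch below illustrates that pattern; the flag names and defaults here are illustrative assumptions, so check train.py for the actual ones:

```python
import argparse

def parse_args(argv=None):
    # Illustrative hyper-parameters; the real names/defaults live in train.py.
    parser = argparse.ArgumentParser(description="DistgSSR training (sketch)")
    parser.add_argument("--angRes", type=int, default=5,
                        help="angular resolution A of the AxA input light field")
    parser.add_argument("--scale_factor", type=int, default=4,
                        help="spatial upscaling factor (2 or 4)")
    parser.add_argument("--batch_size", type=int, default=8)
    parser.add_argument("--lr", type=float, default=2e-4,
                        help="initial learning rate")
    parser.add_argument("--model_dir", type=str, default="./log/",
                        help="directory where checkpoints are saved")
    return parser.parse_args(argv)

args = parse_args([])                  # parse with defaults
print(args.angRes, args.scale_factor)  # 5 4
```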

Test on the datasets:

  • Run test_on_dataset.py to perform testing on each dataset.
  • The original result files and the metric scores will be saved to ./Results/.
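Among the metric scores, PSNR is the standard one for super-resolution. A minimal NumPy sketch of it is shown below; note that the repository's own evaluation is done in Matlab and may differ in details such as the color space used:

```python
import numpy as np

def psnr(ref, est, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and an estimate."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

ref = np.ones((32, 32))
est = ref * 0.9                  # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, est), 2))  # 20.0
```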

Test on your own LFs:

  • Place the input LFs into ./input (see the attached examples).
  • Run demo_test.py to perform spatial super-resolution. Note that the selected pretrained model must match the angular resolution of the input.
  • The super-resolved LF images will be automatically saved to ./output.
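Internally, inference crops each view into patches, super-resolves them in batches, and stitches the outputs back together (the LFdivide/LFintegrate functions mentioned in the news above). The sketch below shows the simplest non-overlapping form of that divide/integrate idea; the released functions are more elaborate and use overlapping patches to suppress border artifacts:

```python
import numpy as np

def divide(img, patch):
    """Split an image whose sides are multiples of `patch` into a patch batch."""
    h, w = img.shape
    return np.array([img[i:i + patch, j:j + patch]
                     for i in range(0, h, patch)
                     for j in range(0, w, patch)])

def integrate(patches, h, w):
    """Stitch a batch of equally sized patches back into an (h, w) image."""
    patch = patches.shape[1]
    out = np.zeros((h, w), dtype=patches.dtype)
    k = 0
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = patches[k]
            k += 1
    return out

img = np.arange(64, dtype=np.float32).reshape(8, 8)
batch = divide(img, 4)                # shape (4, 4, 4): ready for batched inference
restored = integrate(batch, 8, 8)
print(np.array_equal(img, restored))  # True
```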

Results:

Quantitative Results:

Visual Comparisons:

Efficiency:

Angular Consistency:

Citation

If you find this work helpful, please consider citing:

@Article{DistgLF,
    author    = {Wang, Yingqian and Wang, Longguang and Wu, Gaochang and Yang, Jungang and An, Wei and Yu, Jingyi and Guo, Yulan},
    title     = {Disentangling Light Fields for Super-Resolution and Disparity Estimation},
    journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
    year      = {2022},   
}

Contact

Feel free to raise an issue or email wangyingqian16@nudt.edu.cn with any questions regarding this work.
