Official PyTorch implementation of Trinity of Pixel Enhancement: a Joint Solution for Demosaicing, Denoising and Super-Resolution
TENet [PDF] [pixelshift200]

Trinity of Pixel Enhancement: a Joint Solution for Demosaicing, Denoising and Super-Resolution

By Guocheng Qian, Jinjin Gu, Jimmy S. Ren, Chao Dong, Furong Zhao, Juan Lin

Citation

Please cite the following paper if you find TENet useful in your research:

@article{qian2019trinity,
  title={Trinity of Pixel Enhancement: a Joint Solution for Demosaicking, Denoising and Super-Resolution},
  author={Qian, Guocheng and Gu, Jinjin and Ren, Jimmy S and Dong, Chao and Zhao, Furong and Lin, Juan},
  journal={arXiv preprint arXiv:1905.02538},
  year={2019}
}

Resources

Pretrained models

GoogleDrive

Test data

GoogleDrive

PixelShift200 dataset

Pixelshift200 website

Quick Test

Dependencies

  • Python >= 3
  • PyTorch 0.4.1 (CUDA version >= 7.5 if installing with CUDA. More details)
  • Tensorflow (cpu version is enough, only used for visualization in training)
  • Python packages: pip install opencv-python scipy scikit-image
conda create --name pytorch04
conda activate pytorch04
conda install pytorch=0.4.1 cuda90 torchvision tensorflow -c pytorch   
pip install opencv-python scipy scikit-image  

Test Models

  1. Clone this github repo.

    git clone https://github.com/guochengqian/TENet
    cd TENet
    
  2. Place your input images in the $YourInputPath folder; the output will be saved to the $YourSavePath folder. Input images should be raw Bayer images (RGGB pattern).

  3. Run test.

    1. Test the model trained on the synthetic dataset

      sh ./script/test_tenet2-dn-df2k.sh  
      
    2. Test the model trained on the PixelShift200 dataset

      sh ./script/test_tenet2-dn-ps200.sh  
      

      Don't forget to change $YourInputPath and $YourSavePath in the .sh files.
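
      The test scripts consume raw RGGB Bayer images. A common preprocessing step for joint demosaicing networks is to pack the single-channel mosaic into four half-resolution color planes; the sketch below illustrates the idea (the function name is illustrative, not the repo's actual code):

      ```python
      import numpy as np

      def pack_rggb(raw):
          """Pack a single-channel RGGB Bayer mosaic (H, W) into four
          half-resolution planes (4, H/2, W/2): R, Gr, Gb, B."""
          r = raw[0::2, 0::2]   # red sites: even rows, even cols
          gr = raw[0::2, 1::2]  # green sites on red rows
          gb = raw[1::2, 0::2]  # green sites on blue rows
          b = raw[1::2, 1::2]   # blue sites: odd rows, odd cols
          return np.stack([r, gr, gb, b], axis=0)
      ```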

How to Train

We train our model on both a synthetic dataset (DF2K) and our proposed full-color-sampled real-world 4K dataset, PixelShift200.

  1. Data preparation

    1. Synthesis data preparation

      1. Download the DF2K dataset, a combination of DIV2K and Flickr2K.
      2. Crop the images into 256×256 patches using the following code:
        python ./dataset/crop_images.py
        
      3. Generate the txt file used for training:
        python ./dataset/generate_train_df2k.py
        
    2. PixelShift200 data preparation

      1. Download PixelShift200. The files are in .mat format and have 4 channels (R, Gr, Gb, B).
      2. Crop the images into 512×512 patches using the following code:
        python ./dataset/crop_mats.py
        
      3. Generate the txt file used for training:
        python ./dataset/generate_train_mat.py
        
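    Both crop scripts tile the full-size images into fixed-size training patches. A minimal sketch of the idea, assuming non-overlapping crops (the helper name and stride are assumptions, not the repo's actual interface):

    ```python
    import numpy as np
    # For the real PixelShift200 files, the (H, W, 4) array would first be
    # loaded with scipy.io.loadmat; a plain array stands in for it here.

    def crop_patches(img, patch=512, stride=512):
        """Tile an (H, W, C) array into patch×patch crops, row-major order."""
        h, w = img.shape[:2]
        patches = []
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                patches.append(img[y:y + patch, x:x + patch])
        return patches
    ```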
  2. Train the model on the synthetic dataset

    sh script/run_tenet2-dn-df2k.sh
    
  3. Train the model on the PixelShift200 dataset

    sh script/run_tenet2-dn-ps200.sh    
    

TENet

Our approach consists of two parts: the first performs a joint denoising and super-resolution mapping, and the second converts the super-resolved mosaic image into a full-color image. The two parts can be trained and applied jointly. The network structure is illustrated as follows.
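The two-stage idea can be sketched as two chained sub-networks, assuming the mosaic is fed in as four packed half-resolution planes. This is a toy stand-in for illustration only, not the paper's actual blocks (TENet uses much deeper residual structures):

```python
import torch
import torch.nn as nn

class TwoStageSketch(nn.Module):
    """Toy two-stage pipeline: (1) joint denoising + SR on the packed
    4-channel mosaic, (2) demosaic the SR mosaic into full-color RGB."""
    def __init__(self, scale=2, feats=32):
        super().__init__()
        # Stage 1: joint denoising + super-resolution in mosaic space.
        self.dn_sr = nn.Sequential(
            nn.Conv2d(4, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, 4 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # 4-channel mosaic, scale× larger
        )
        # Stage 2: mosaic -> full-color image (undoes the 2× packing).
        self.demosaic = nn.Sequential(
            nn.Conv2d(4, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),      # RGB at full spatial resolution
        )

    def forward(self, packed):
        return self.demosaic(self.dn_sr(packed))
```

For a (1, 4, 16, 16) packed input and scale=2, the output is a (1, 3, 64, 64) RGB image: one 2× factor from unpacking the mosaic and one from super-resolution.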

PixelShift200 dataset

We employ advanced pixel shift technology to perform full color sampling of each image. Pixel shift takes four shots of the same scene, physically moving the camera sensor by one pixel horizontally or vertically between shots, so that every pixel is sampled in all colors. This ensures that the captured images follow the distribution of natural images taken by the camera while the full color information is obtained directly, without interpolation. The collected images are therefore artifact-free, which leads to better training results for demosaicing-related tasks.
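The four-shot acquisition can be simulated in a few lines, which also shows why the merged result needs no interpolation: every pixel is observed under every CFA color. This is an idealised, noise-free simulation with illustrative names, not the actual capture pipeline:

```python
import numpy as np

def bayer_channel(y, x):
    """Color recorded at (y, x) by an RGGB CFA: 0=R, 1=G, 2=B."""
    if y % 2 == 0 and x % 2 == 0:
        return 0
    if y % 2 == 1 and x % 2 == 1:
        return 2
    return 1

def capture(img, dy, dx):
    """One Bayer exposure of an (H, W, 3) scene with the sensor
    shifted by (dy, dx) pixels (idealised: no noise, no motion)."""
    h, w, _ = img.shape
    shot = np.empty((h, w), img.dtype)
    for y in range(h):
        for x in range(w):
            shot[y, x] = img[y, x, bayer_channel(y + dy, x + dx)]
    return shot

def pixel_shift_merge(shots):
    """Merge the four shifted exposures {(dy, dx): shot} back into a
    full-color image: every pixel was seen under every CFA color."""
    first = next(iter(shots.values()))
    h, w = first.shape
    rgb = np.empty((h, w, 3), first.dtype)
    for (dy, dx), shot in shots.items():
        for y in range(h):
            for x in range(w):
                rgb[y, x, bayer_channel(y + dy, x + dx)] = shot[y, x]
    return rgb
```

With the four shifts {(0,0), (0,1), (1,0), (1,1)}, the merged image reproduces the scene exactly, with the green channel sampled twice per pixel.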

Download the dataset from the pixelshift200 website.

Results

Results on Real Images
