
DPANet

The implementation of "Learning Dual-Pixel Alignment for Defocus Deblurring".

Prerequisites

  • The code has been tested in the following environment:
    • Ubuntu 18.04
    • Python 3.7.9
    • PyTorch 1.7.0
    • cudatoolkit 10.0.130
    • NVIDIA TITAN RTX GPU

Datasets

  • Training datasets
  • Testing datasets

Preparation (for DCNv2)

$ cd DPANet
$ python setup.py build develop
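
After building, you can sanity-check the extension by running a dummy tensor through a deformable convolution. This is a minimal sketch, assuming the repo vendors a CharlesShang-style DCNv2 that exposes a DCN module from dcn_v2.py; the import path is an assumption, so adjust it to this repo's layout. A CUDA device is required.

import torch
from dcn_v2 import DCN  # assumed module path; adjust to where this repo builds DCNv2

# 3 -> 16 channels, 3x3 deformable conv; DCN predicts offsets/masks internally
dcn = DCN(3, 16, kernel_size=3, stride=1, padding=1).cuda()  # DCNv2 ops run on CUDA only
x = torch.randn(1, 3, 64, 64).cuda()
print(dcn(x).shape)  # expected: torch.Size([1, 16, 64, 64])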

Test

$ cd DPANet
$ python test.py

For more results, see More results on DPDD.

Train

  • First, crop the images of the DPDD training set into 512×512 patches using the same settings as DPDNet. (You can run $ python ./image_to_patch_filter.py from DPDD to generate the patches; a plain cropping sketch is given after this list.)
  • After generating the training patches, organize the training dataset to match our code layout: $ mkdir dpdd_datasets/dpdd_16bit, then move the train/test folders into the newly created directory.
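
For reference, the sketch below shows only the plain cropping step: non-overlapping 512×512 patches from 16-bit images. It is not the official image_to_patch_filter.py (which also filters patches, as its name suggests), and the directory names are assumptions; adjust them to your DPDD layout.

import os
import cv2

PATCH = 512
src_dir = "dpdd_16bit/train_c/source"  # assumed layout; point this at a DPDD image folder
dst_dir = "dpdd_patches"
os.makedirs(dst_dir, exist_ok=True)

for name in sorted(os.listdir(src_dir)):
    img = cv2.imread(os.path.join(src_dir, name), cv2.IMREAD_UNCHANGED)  # keep 16-bit depth
    h, w = img.shape[:2]
    stem = os.path.splitext(name)[0]
    # non-overlapping tiling; border pixels that do not fill a full patch are dropped
    for i, y in enumerate(range(0, h - PATCH + 1, PATCH)):
        for j, x in enumerate(range(0, w - PATCH + 1, PATCH)):
            patch = img[y:y + PATCH, x:x + PATCH]
            cv2.imwrite(os.path.join(dst_dir, f"{stem}_{i}_{j}.png"), patch)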

Start training

$ cd DPANet
$ python train.py

During training, we first train DPANet with MSE loss. After that, we pick the checkpoint that gives the best result across all epochs (about 300 epochs, example ckpt trained with MSE) and finetune it with Charbonnier loss; a sketch of the loss is given below.
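
For reference, a minimal PyTorch sketch of the Charbonnier penalty used in the finetuning stage, assuming the common formulation; the eps value is an assumption, so check train.py for the exact setting.

import torch

def charbonnier_loss(pred, target, eps=1e-3):
    # Charbonnier penalty: a smooth, L1-like loss sqrt(diff^2 + eps^2), averaged over all elements.
    # eps=1e-3 is a common default, not necessarily the value used in this repo.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))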

Results

Here we give the results of different methods on the DPDD and PIXEL datasets.
