The implementation of "Learning Dual-Pixel Alignment for Defocus Deblurring".
- The code has been tested in the following environment:
- Ubuntu 18.04
- Python 3.7.9
- PyTorch 1.7.0
- cudatoolkit 10.0.130
- NVIDIA TITAN RTX GPU
$ cd DPANet
$ python setup.py build develop
- Download our pre-trained model and put final.pth into the ./checkpoint folder.
$ cd DPANet
$ python test.py
For more results, refer to More results on DPDD.
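Results on DPDD are commonly reported as PSNR/SSIM against the all-in-focus ground truth. The snippet below is a minimal plain-Python PSNR sketch for reference; the function name and the 8-bit peak value are assumptions for illustration, not part of this repository:

```python
import math

def psnr(pred, target, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images.

    `pred` and `target` are flat sequences of pixel values; `peak` is the
    maximum possible pixel value (255 for 8-bit images).
    """
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Example: a constant error of 1 on every pixel of an 8-bit image
print(round(psnr([10, 20, 30], [11, 21, 31]), 2))
```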
- First, crop the images of the DPDD train set into 512×512 patches, using the same settings as DPDNet. (You can use
$ python ./image_to_patch_filter.py
from DPDD to generate the patches.)
- After obtaining the training patches, organize the training dataset according to our code implementation:
$ mkdir dpdd_datasets/dpdd_16bit
, then move the train/test folders into the newly created directory.
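As a rough illustration of the patch-cropping step, the sketch below tiles a 2-D image into fixed-size crops with a sliding window. The stride, function name, and pure-Python list representation are assumptions for illustration only; the repository's actual settings come from DPDNet's image_to_patch_filter.py:

```python
def crop_into_patches(image, patch=512, stride=512):
    """Tile a 2-D image (a list of rows) into patch x patch crops.

    Windows that would run past the border are skipped; a stride smaller
    than the patch size yields overlapping patches.
    """
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append([row[left:left + patch]
                            for row in image[top:top + patch]])
    return patches

# A 4x6 toy "image" cropped into 2x2 patches with stride 2 -> 6 patches
toy = [[r * 6 + c for c in range(6)] for r in range(4)]
print(len(crop_into_patches(toy, patch=2, stride=2)))
```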
$ cd DPANet
$ python train.py
During training, we first train DPANet with MSE loss. We then pick the checkpoint that gives the best result among all epochs (about 300 epochs; an example checkpoint trained with MSE loss is provided) and finetune it with Charbonnier loss.
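The Charbonnier loss used for finetuning is a smooth variant of L1, L(x, y) = sqrt((x - y)^2 + eps^2). Below is a minimal plain-Python sketch; the eps value is a common default, not necessarily the one used in this code:

```python
import math

def charbonnier_loss(pred, target, eps=1e-3):
    """Mean Charbonnier penalty sqrt((p - t)^2 + eps^2) over all elements.

    Behaves like L2 near zero error and like L1 for large errors, which
    makes it less sensitive to outliers than plain MSE.
    """
    return sum(math.sqrt((p - t) ** 2 + eps ** 2)
               for p, t in zip(pred, target)) / len(pred)

# With zero error the loss approaches eps; with unit error it approaches 1
print(charbonnier_loss([0.0, 0.0], [0.0, 0.0]) < 2e-3)
print(abs(charbonnier_loss([1.0], [0.0]) - 1.0) < 1e-3)
```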
Here we give the results of different methods on the DPDD and PIXEL datasets.