A PyTorch re-implementation of "Basis Prediction Networks for Effective Burst Denoising with Large Kernels".
The original source code of the paper was implemented in TensorFlow 1 but was not released by the authors due to patent issues. A TensorFlow 2 re-implementation was shared by Zhihao Xia, the first author of the paper, at https://github.com/likesum/bpn.
To ensure faithful reproduction, this PyTorch re-implementation is based entirely on the paper and its supplementary materials and does not refer to the TensorFlow 2 source code mentioned above. It achieves results comparable to those reported in the paper.
Parts of this work follow https://github.com/z-bingo/kernel-prediction-networks-PyTorch.
```
numpy~=1.21.4
torch~=1.11.0.dev20211210+cu111
scikit-image~=0.19.2
imagesize~=1.3.0
configobj~=5.0.6
torchvision~=0.12.0.dev20211210+cu111
Pillow~=8.4.0
natsort~=8.0.1
tensorboardX~=2.4.1
```
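Assuming the pins above are collected in a `requirements.txt` at the repository root (an assumption; note that the nightly `torch`/`torchvision` builds may need to be installed from PyTorch's own package index rather than PyPI), the dependencies can be installed with:

```
pip install -r requirements.txt
```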
Because of the huge amount of data in the Open Images Dataset, re-implementing the paper's experiments on it would cost substantial and unnecessary time and energy. Therefore, a simpler demonstration experiment is designed: images are synthetically corrupted with additive white Gaussian noise (AWGN).
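As a minimal sketch of this setup (the burst size and noise level below are illustrative placeholders, not the exact training settings), a clean image can be replicated into a synthetic burst and corrupted with AWGN:

```python
import torch

def make_noisy_burst(clean: torch.Tensor, burst_size: int = 8,
                     sigma: float = 0.1) -> torch.Tensor:
    """clean: (C, H, W) tensor in [0, 1] -> noisy burst of shape (burst_size, C, H, W)."""
    # Replicate the clean frame into a burst and add i.i.d. Gaussian noise.
    burst = clean.unsqueeze(0).repeat(burst_size, 1, 1, 1)
    noise = torch.randn_like(burst) * sigma
    return (burst + noise).clamp(0.0, 1.0)
```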
The images of the ImageNet 2012 Validation dataset need to be downloaded to the `data/train/ILSVRC2012-Val/` folder. The images of the BSD300, Kodak, and SET14 datasets need to be downloaded to `data/test/BSD300/`, `data/test/KODAK/`, and `data/test/SET14/`, respectively. Images from the SET14 dataset are already provided for demonstration.
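After downloading, the `data/` directory should look like this:

```
data/
├── train/
│   └── ILSVRC2012-Val/
└── test/
    ├── BSD300/
    ├── KODAK/
    └── SET14/
```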
To train the BPN from scratch on the ImageNet 2012 Validation dataset with AWGN, run:

```
python train_and_eval.py --config_file configs/AWGN_RGB.conf -c -m
```
To resume training from a previously saved checkpoint (e.g., after redeploying the code or an accidental training interruption), run:

```
python train_and_eval.py --config_file configs/AWGN_RGB.conf -c -m -ckpt <previously saved checkpoint, best or step number>
```
If you want to train a BPN model for denoising grayscale images, run:

```
python train_and_eval.py --config_file configs/AWGN_gray.conf -c -m
```
The "color" entry in the config file ("True" or "False") determines whether `data_provider.py` reads the image data as RGB or grayscale.
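For illustration, a sketch of how such a flag typically maps to Pillow's image modes is shown below (an assumption for demonstration, not the actual `data_provider.py` code):

```python
import numpy as np
from PIL import Image

def load_image(path: str, color: bool) -> np.ndarray:
    """Read an image as RGB (H, W, 3) or grayscale (H, W), scaled to [0, 1]."""
    mode = "RGB" if color else "L"  # Pillow modes: "RGB" for color, "L" for grayscale
    with Image.open(path) as img:
        return np.asarray(img.convert(mode), dtype=np.float32) / 255.0
```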
Convert your own data into any format that Pillow can read (e.g., JPEG, PNG, TIFF) and organize it into a dedicated folder. Write a config file matching your data characteristics and experimental needs, place it in the `configs/` directory, and run:

```
python train_and_eval.py --config_file configs/<your_config>.conf -c -m
```
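A minimal config sketch is shown below; only the "color" entry is documented above, and the remaining key is a hypothetical placeholder to be replaced by the options actually read by `train_and_eval.py`:

```
# configs/my_dataset.conf (hypothetical example)
color = True                            # documented above: RGB vs. grayscale loading
train_data_dir = data/train/my_dataset  # hypothetical placeholder key
```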
My pre-trained models, trained on the ImageNet 2012 Validation dataset with AWGN, are provided for download.
The downloaded models for grayscale and color images should be placed in the `models/checkpoints/AWGN_gray/` and `models/checkpoints/AWGN_RGB/` directories, respectively.
To test multi-frame denoising of color images with the pre-trained BPN model, run:

```
python train_and_eval.py --config_file configs/AWGN_RGB.conf -c -m --eval -ckpt 75020
```
Similarly, to test multi-frame denoising of grayscale images with the pre-trained BPN model, run:

```
python train_and_eval.py --config_file configs/AWGN_gray.conf -c -m --eval -ckpt 87260
```
Several representative examples of the denoising results are shown below; more result images can be found in the `results/` directory.
(Result images omitted; each example showed the ground truth, the noisy input, and the denoised output side by side.)

| Example | Noisy (dB) | Denoised (dB) |
|---|---|---|
| 1 | 20.30 | 28.03 |
| 2 | 20.49 | 32.17 |
| 3 | 18.58 | 27.83 |
| 4 | 18.85 | 32.67 |
The quantitative evaluation results on the three test sets are given below. In addition to PSNR, the metric used in the original paper, this re-implementation reports three additional image quality metrics: SSIM, RMSE, and Pearson's R.
Color (RGB) model:

| Dataset | PSNR (dB) | SSIM | RMSE | R |
|---|---|---|---|---|
| BSD300 | 33.76 | 0.943 | 0.021 | 0.996 |
| KODAK | 35.07 | 0.941 | 0.018 | 0.996 |
| SET14 | 33.21 | 0.924 | 0.023 | 0.996 |
| Average | 33.84 | 0.942 | 0.021 | 0.996 |
Grayscale model:

| Dataset | PSNR (dB) | SSIM | RMSE | R |
|---|---|---|---|---|
| BSD300 | 31.43 | 0.918 | 0.028 | 0.992 |
| KODAK | 33.76 | 0.917 | 0.021 | 0.993 |
| SET14 | 32.24 | 0.909 | 0.026 | 0.995 |
| Average | 31.63 | 0.917 | 0.028 | 0.993 |
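These four metrics can be computed with the dependencies listed above. The sketch below is a plausible reconstruction, assuming images normalized to [0, 1]; the actual evaluation script may differ in details such as data-range handling:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean: np.ndarray, denoised: np.ndarray) -> dict:
    """Both arrays are floats in [0, 1]; color images have shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
    ssim = structural_similarity(
        clean, denoised, data_range=1.0,
        channel_axis=-1 if clean.ndim == 3 else None)
    rmse = float(np.sqrt(np.mean((clean - denoised) ** 2)))
    r = float(np.corrcoef(clean.ravel(), denoised.ravel())[0, 1])  # Pearson R
    return {"PSNR": psnr, "SSIM": ssim, "RMSE": rmse, "R": r}
```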
All the materials, including the code and pre-trained models, are made freely available for non-commercial use under the GNU General Public License, Version 3.