Qiucheng Wu1*,
Yifan Jiang2*,
Junru Wu3*,
Victor Kulikov6,
Vidit Goel6,
Nikita Orlov6,
Humphrey Shi4,5,6,
Zhangyang Wang2,
Shiyu Chang1
1University of California, Santa Barbara, 2The University of Texas at Austin, 3Texas A&M University, 4UIUC, 5University of Oregon, 6Picsart AI Research (PAIR)
*denotes equal contribution.
This is the official implementation of the paper "Broad Spectrum Image Deblurring via An Adaptive Super-Network".
In blurry images, the degree of blur can vary drastically due to factors such as the varying speeds of shaking cameras and moving objects, as well as defects of the camera lens. However, current end-to-end models fail to explicitly account for this diversity of blurs. This unawareness compromises specialization at each blur level, yielding sub-optimal deblurred images as well as redundant post-processing. How to specialize one model at different blur levels simultaneously, while still ensuring coverage and generalization, therefore becomes an emerging challenge. In this work, we propose Ada-Deblur, a super-network that can be applied to a "broad spectrum" of blur levels with no re-training on novel blurs. To balance specialization at individual blur levels against coverage of a wide range of blur levels, the key idea is to dynamically adapt the network architecture from a single well-trained super-network, enabling flexible image processing with different deblurring capacities at test time. Extensive experiments demonstrate that our model outperforms strong baselines, achieving better reconstruction accuracy while incurring minimal computational overhead. Moreover, our method is effective on both synthetic and realistic blurs compared to these baselines. The performance gap between our model and the state of the art becomes more prominent when testing on unseen and strong blur levels, where our model achieves PSNR improvements of around 1 dB.
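To illustrate the idea of adapting deblurring capacity at test time, here is a toy NumPy sketch. It is not the paper's super-network: the real method adapts the network architecture, whereas this stand-in simply applies more unsharp-masking passes for stronger estimated blurs. All names (`box_blur`, `deblur_adaptive`) are hypothetical.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with edge padding (placeholder smoothing)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def deblur_adaptive(image, blur_level, max_stages=5):
    """Toy capacity adaptation: run more refinement passes for stronger blurs.

    blur_level in [0, 1] selects how many stages of the (placeholder)
    refinement to apply; the actual model instead selects a sub-network
    from a single well-trained super-network.
    """
    n_stages = max(1, round(blur_level * max_stages))
    out = image.astype(np.float64)
    for _ in range(n_stages):
        blurred = box_blur(out)
        out = np.clip(out + 0.5 * (out - blurred), 0.0, 255.0)  # unsharp mask
    return out, n_stages
```

The point of the sketch is only the control flow: one model, with its effective capacity chosen per input at test time.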
Please set up the environment as follows:
conda create -n AdaDeblur python=3.7
conda activate AdaDeblur
conda install pytorch=1.1 torchvision=0.3 cudatoolkit=9.0 -c pytorch
pip install matplotlib scikit-image opencv-python yacs joblib natsort h5py tqdm
cd pytorch-gradual-warmup-lr; python setup.py install; cd ..
First, please download the GoPro dataset (both GoPro_Large and GoPro_Large_all) and the RealBlur dataset, and unzip them into the deblurring_release/Datasets folder.
Your Datasets directory tree should look like this:

Datasets
├── train
│   ├── input
│   └── target
├── test
│   ├── input
│   └── target
├── GoPro_Large_all
├── RealBlur-J_ECC_IMCORR_centroid_itensity_ref
├── RealBlur_J_test_list.txt
├── RealBlur_R_test_list.txt
└── RealBlur-R_BM3D_ECC_IMCORR_centroid_itensity_ref
Then, prepare our dataset with diverse blur levels:
python genNewGoPro.py
This step creates blurs at different blur levels for training and evaluation. We also provide the data with different blur levels (based on the GoPro dataset) here.
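For intuition: the GoPro dataset synthesizes motion blur by averaging consecutive sharp frames from high-frame-rate video, and a wider averaging window yields a stronger blur, which gives a controllable blur level. A minimal sketch of that protocol (the function name is hypothetical, and this is not the exact logic of genNewGoPro.py):

```python
import numpy as np

def synthesize_blur(frames, window):
    """Average `window` consecutive sharp frames to simulate motion blur.

    frames: array of shape (T, H, W) or (T, H, W, C) holding sharp
    frames from a high-frame-rate video. A larger `window` averages
    more frames and therefore produces a stronger synthetic blur.
    """
    assert 1 <= window <= len(frames)
    start = (len(frames) - window) // 2  # center the averaging window
    return frames[start:start + window].mean(axis=0)
```

Sweeping `window` over a range of sizes is what produces a spectrum of blur levels from a single sharp sequence.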
First, please download the pre-trained model weights here, and put them in the deblurring_release/pretrained_models folder.
To reproduce our results on the GoPro dataset:
cd deblurring_release
python test.py
python calPSNR.py
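For reference, the PSNR metric reported by calPSNR.py is, for 8-bit images, computed from the mean squared error against the sharp ground truth. A minimal NumPy sketch (not the script's exact implementation):

```python
import numpy as np

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit images, in dB."""
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better; the roughly 1 dB gains reported above are measured with this metric.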
To reproduce our results on the RealBlur dataset:
cd deblurring_release
python test_realblur.py
python calPSNR_realblur.py
To start training on the GoPro dataset with diverse blur levels:
cd deblurring_release
python train.py
The training script loads its configuration from deblurring_release/training.yml.
Should you have any questions, please contact qiucheng@ucsb.edu.
This code is adapted from MPRNet.