This repository contains the official implementation of *Deep Model-Based Super-Resolution with Non-Uniform Blur* (WACV 2023).
To train the model, first download the COCO dataset, available at https://cocodataset.org, and then run:
python main_train.py -opt options/train_nimbusr.json
A pre-trained model is available at `model_zoo/DMBSR.pth`.
Our blur kernels are available for download here; they need to be placed in the `kernels/` folder.
See `test_model.ipynb` to test the model on the COCO dataset.
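For a quick scripted check outside the notebook, a minimal sketch of loading the pre-trained weights could look like the following. `torch.load` and `load_state_dict` are standard PyTorch calls, but the network class name and its import path below are assumptions; refer to `test_model.ipynb` for the exact definitions used in this repository.

```python
import torch

# Minimal sketch: load the pre-trained checkpoint shipped in model_zoo/.
# torch.load / load_state_dict are standard PyTorch; the network class itself
# must come from this repository (see test_model.ipynb for the exact class).
state_dict = torch.load('model_zoo/DMBSR.pth', map_location='cpu')
print(f'Checkpoint contains {len(state_dict)} entries')

# Hypothetical usage once the network class is imported (names assumed):
# from models.network_nimbusr import NIMBUSR
# model = NIMBUSR(...)              # constructor arguments are config-dependent
# model.load_state_dict(state_dict)
# model.eval()
```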
We achieve state-of-the-art super-resolution results in the presence of spatially-varying blur. Here are some of the results we obtained; feel free to test on your own samples using the testing notebook.
*Visual comparison (left to right): LR, SwinIR, BlindSR, USRNet, Ours, HR.*
*Visual comparison (left to right): LR, SwinIR, BlindSR, USRNet, Ours.*
For this section, we used the code provided by https://github.com/GuillermoCarbajal/NonUniformBlurKernelEstimation to estimate the blur kernels and combined their kernel estimation with our super-resolution model. We also used the dataset provided by "Laurent D'Andrès, Jordi Salvador, Axel Kochale, and Sabine Süsstrunk. Non-parametric blur map regression for depth of field extension".
*Visual comparison (left to right): LR, SwinIR, BlindSR, Ours.*
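At a high level, the combination described above is a two-stage pipeline: first estimate a spatially-varying blur kernel field from the low-resolution image, then feed the image and the estimated kernels to our super-resolution model. The sketch below only outlines that structure; both helper functions are hypothetical placeholders, not the actual APIs of either repository.

```python
import torch

def estimate_kernels(lr_image: torch.Tensor) -> torch.Tensor:
    """Placeholder for stage 1: predict a spatially-varying blur kernel field
    from the LR image using the NonUniformBlurKernelEstimation code."""
    raise NotImplementedError

def super_resolve(lr_image: torch.Tensor, kernels: torch.Tensor, sf: int) -> torch.Tensor:
    """Placeholder for stage 2: run this repository's pre-trained SR model on
    the LR image together with the estimated kernels (see test_model.ipynb)."""
    raise NotImplementedError

def real_image_pipeline(lr_image: torch.Tensor, sf: int = 2) -> torch.Tensor:
    # Stage 1: non-uniform blur kernel estimation from the LR input.
    kernels = estimate_kernels(lr_image)
    # Stage 2: model-based super-resolution conditioned on the estimated kernels.
    return super_resolve(lr_image, kernels, sf)
```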
*Visual comparison (left to right): LR, DMPHN, RealBlur, MPRNet, Ours.*
Our code uses KAIR as its base; please also follow their licenses. We would like to thank them for the amazing repository.
If you use our work, please cite it as follows:
@InProceedings{laroche2023dmbsr,
title = {Deep Model-Based Super-Resolution with Non-Uniform Blur},
author = {Laroche, Charles and Almansa, Andrés and Tassano, Matias},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year = {2023}
}