By Yuqi Zhang
This repo is an improvement of "Deep Multi-Model Fusion for Single-Image Dehazing" (ICCV 2019), whose original implementation was written by Zijun Deng at the South China University of Technology. The original repo can be found here.
| Dataset | O-HAZE | | | | HazeRD | | | |
|---|---|---|---|---|---|---|---|---|
| Method | PSNR | SSIM | MSE | CIEDE | PSNR | SSIM | MSE | CIEDE |
| DM2F-Net | 25.113 | 0.7742 | 0.0032 | 5.2000 | 14.212 | 0.8145 | 0.0724 | 16.8331 |
| DM2F-Net-improved | 25.602 | 0.7752 | 0.0030 | 4.9714 | 15.774 | 0.8291 | 0.0589 | 15.5905 |
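For reference, the PSNR and MSE columns above can be computed as in the generic sketch below. This is not the repo's evaluation code (which may clip or quantize images differently); `pred`/`gt` are hypothetical arrays with values in [0, 1].

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images with values in [0, 1]."""
    return float(np.mean((a - b) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    return float(10.0 * np.log10(data_range ** 2 / mse(a, b)))

# Example: a mid-gray prediction against a black ground truth.
pred = np.full((4, 4), 0.5)
gt = np.zeros((4, 4))
print(mse(pred, gt), psnr(pred, gt))  # 0.25 6.020599913279624
```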
The checkpoint and dehazing results can be found at Baidu Drive.
Make sure you have Python>=3.7 installed on your machine.
Environment setup:

1. Create the conda environment:

   ```shell
   conda create -n dm2f
   conda activate dm2f
   ```

2. Install dependencies (tested with PyTorch 2.3.0):

   1. Install `pytorch==2.3.0` and `torchvision==0.18.0`.
   2. Install the other dependencies:

      ```shell
      pip install -r requirements.txt
      ```
Prepare the dataset:

1. Download the RESIDE dataset from the official webpage.
2. Download the O-HAZE dataset from the official webpage.
3. Make a directory `./data` and create a symbolic link for the uncompressed data, e.g., `./data/RESIDE`.
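The symlink step above can also be scripted. `link_dataset` below is a hypothetical helper, not part of this repo, and `/path/to/RESIDE` stands for wherever you uncompressed the archive.

```python
import os

def link_dataset(src: str, data_root: str = "data") -> str:
    """Create `data_root` and symlink an uncompressed dataset directory into it."""
    os.makedirs(data_root, exist_ok=True)
    dst = os.path.join(data_root, os.path.basename(src.rstrip("/")))
    if not os.path.lexists(dst):  # skip if the link (or a file) already exists
        os.symlink(os.path.abspath(src), dst)
    return dst

# e.g. link_dataset("/path/to/RESIDE") creates ./data/RESIDE
```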
- Set the path of the datasets in `tools/config.py`.
- Run:

  ```shell
  python train.py
  ```
The model uses a pretrained ResNeXt (`resnext101_32x8d`) from torchvision.
Training hyper-parameters are set at the top of `train.py`; change them there as needed.
Training a model on a single NVIDIA A100 GPU takes about 4 hours.
- Set the paths of the five benchmark datasets in `tools/config.py`.
- Put the trained model in `./ckpt/`.
- Run:

  ```shell
  python test.py
  ```

Testing settings are set at the top of `test.py`; change them there as needed.
DM2F-Net-improved is released under the MIT license.
If you find the code helpful to your research, please give the repo a star!