Learning-of-Image-Dehazing-Models-for-Segmentation-Tasks

PyTorch code for the paper Learning of Image Dehazing Models for Segmentation Tasks, EUSIPCO 2019 (https://arxiv.org/pdf/1903.01530.pdf)

Approach:

The generator network receives a hazy image as input and outputs a candidate dehazed image. Similar to the single-image dehazing model, the generator loss combines LGAN, Lpixel, Lpercep and Lseg. LGAN is the adversarial loss from Isola et al., used to generate fake images. Lpixel is the reconstruction loss between the ground truth for dehazing (i.e., the real clear image) and the fake dehazed image, computed on individual pixel values, which pushes the network to produce crisper images. Lpercep is a perceptual loss that preserves important semantic elements of the image in the generator's output. The segmentation loss Lseg is computed by feeding the generator's output (i.e., the dehazed image) into the segmentation network; the resulting segmentation map is then compared to the ground-truth segmentation map using the L2 loss. In short, the model simultaneously tries to remove as much haze as possible while preserving, or even improving, segmentation performance.
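The combined generator objective described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the lambda weights, function names and tensor arguments are assumptions, and the perceptual loss is shown here simply as an L2 distance between precomputed feature maps.

```python
import torch
import torch.nn as nn

# Illustrative loss weights -- the paper's actual values may differ.
LAMBDA_PIXEL, LAMBDA_PERCEP, LAMBDA_SEG = 100.0, 10.0, 10.0

bce = nn.BCEWithLogitsLoss()  # L_GAN (pix2pix-style adversarial loss, Isola et al.)
l1 = nn.L1Loss()              # L_pixel: per-pixel reconstruction
mse = nn.MSELoss()            # L2 distance, used for L_percep and L_seg

def generator_loss(d_fake_logits, fake_dehazed, real_clear,
                   feat_fake, feat_real, seg_pred, seg_gt):
    """Total generator loss: L_GAN + weighted L_pixel, L_percep and L_seg."""
    # Adversarial term: the generator wants the discriminator to output "real".
    l_gan = bce(d_fake_logits, torch.ones_like(d_fake_logits))
    # Pixel-wise reconstruction against the clear ground-truth image.
    l_pixel = l1(fake_dehazed, real_clear)
    # Perceptual term on (precomputed) feature maps of fake vs. real image.
    l_percep = mse(feat_fake, feat_real)
    # Segmentation term: L2 between the segmentation map of the dehazed
    # image and the ground-truth segmentation map.
    l_seg = mse(seg_pred, seg_gt)
    return (l_gan + LAMBDA_PIXEL * l_pixel
            + LAMBDA_PERCEP * l_percep + LAMBDA_SEG * l_seg)
```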

What does this paper propose?

This paper demonstrates the usefulness of including a segmentation loss in the end-to-end training of deep learning models for dehazing. The learning-based dehazing model is optimized not just for denoising metrics, but also with a criterion aimed at a specific downstream task, which can yield significant performance improvements compared to an unguided approach. Moreover, the performance of DFS could be boosted further by directly using an approximation of the IoU/iIoU measures for gradient descent, since these are better optimization targets than mean squared error and similar criteria.
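To illustrate that last idea, a differentiable ("soft") IoU loss can be written directly on softmax probabilities, so it can be minimized by gradient descent. This is a generic sketch of the technique, not code from the paper:

```python
import torch

def soft_iou_loss(probs, target_onehot, eps=1e-6):
    """Soft IoU loss: 1 - mean per-class IoU, differentiable w.r.t. probs.

    probs:         (N, C, H, W) softmax probabilities.
    target_onehot: (N, C, H, W) one-hot encoded ground-truth labels.
    """
    # Soft intersection and union, summed over batch and spatial dims.
    inter = (probs * target_onehot).sum(dim=(0, 2, 3))
    union = (probs + target_onehot - probs * target_onehot).sum(dim=(0, 2, 3))
    # eps avoids division by zero for classes absent from the batch.
    iou_per_class = (inter + eps) / (union + eps)
    return 1.0 - iou_per_class.mean()
```

When the predicted probabilities match the one-hot target exactly, the loss is zero; unlike hard IoU, intermediate predictions still receive useful gradients.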

How to run train_DFS.py?

  1. Train a segmentation network on the normal Cityscapes dataset (with no overlap with Foggy Cityscapes; check the image IDs): https://www.cityscapes-dataset.com/.
  2. Follow the procedure to build the Foggy Cityscapes dataset (https://people.ee.ethz.ch/~csakarid/SFSU_synthetic/).
  3. Make a folder called cityscape (the path to that folder is your "path_exp") with 3 sub-folders: train_set, val_set and test_set, each containing 3 sub-sub-folders: a, b and c. Put the hazy images in a, the non-hazy images in b and the segmentation masks in c (make sure the splits do not overlap between train, val and test).
  4. Change "path_exp" in train_DFS.py to your actual experiment path.
  5. That's it, just run train_DFS.py.
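The folder layout from step 3 can be created with a short helper. The function name is hypothetical, and it assumes "path_exp" points at the cityscape folder itself, as step 3 suggests:

```python
import os

def make_dataset_dirs(path_exp):
    """Create the train/val/test folder tree described in step 3.

    Sub-folder meaning: a = hazy images, b = non-hazy (clear) images,
    c = segmentation masks.
    """
    for split in ("train_set", "val_set", "test_set"):
        for sub in ("a", "b", "c"):
            os.makedirs(os.path.join(path_exp, split, sub), exist_ok=True)

# Example: make_dataset_dirs("/path/to/cityscape")
```

After running it, copy the corresponding images into each a, b and c folder, keeping the three splits disjoint.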

Poster:

Poster
