ICDA: Illumination-Coupled Domain Adaptation Framework for Unsupervised Nighttime Semantic Segmentation

This repository provides the official code for the IJCAI 2023 paper ICDA: Illumination-Coupled Domain Adaptation Framework for Unsupervised Nighttime Semantic Segmentation. The code is organized using PyTorch Lightning.

Abstract

The performance of nighttime semantic segmentation has been significantly improved thanks to recent unsupervised methods. However, these methods still suffer from complex domain gaps, i.e., the challenging illumination gap and the inherent dataset gap. In this paper, we propose the illumination-coupled domain adaptation framework (ICDA) to effectively avoid the illumination gap and mitigate the dataset gap by coupling daytime and nighttime images as a whole with semantic relevance. Specifically, we first design a new composite enhancement method (CEM) that considers not only illumination but also spatial consistency to construct the source and target domain pairs, which provides the basic adaptation unit for our ICDA. Next, to avoid the illumination gap, we devise the Deformable Attention Relevance (DAR) module to capture the semantic relevance inside each domain pair, which can couple the daytime and nighttime images at the feature level and adaptively guide the predictions of nighttime images. Besides, to mitigate the dataset gap and acquire domain-invariant semantic relevance, we propose the Prototype-based Class Alignment (PCA) module, which improves the usage of category information and performs fine-grained alignment. Extensive experiments show that our method reduces the complex domain gaps and achieves state-of-the-art performance for nighttime semantic segmentation.
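
The PCA module above builds per-class feature prototypes and aligns them across domains. As a rough illustration only (the paper's exact formulation differs; the function names and the L1 pull below are assumptions, not the repository's code), the generic prototype-based class alignment idea in PyTorch:

import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    # features: (B, C, H, W) backbone features; labels: (B, H0, W0) trainIds
    B, C, H, W = features.shape
    labels = F.interpolate(labels.unsqueeze(1).float(), size=(H, W), mode="nearest")
    labels = labels.squeeze(1).long()
    protos = features.new_zeros(num_classes, C)
    feats = features.permute(0, 2, 3, 1)            # (B, H, W, C)
    for k in range(num_classes):
        mask = labels == k
        if mask.any():
            protos[k] = feats[mask].mean(dim=0)     # mean feature of class k
    return protos

def prototype_alignment_loss(src_protos, tgt_protos):
    # illustrative: pull per-class prototypes of the two domains together
    valid = (src_protos.abs().sum(1) > 0) & (tgt_protos.abs().sum(1) > 0)
    if not valid.any():
        return src_protos.sum() * 0.0               # keep the graph, zero loss
    return F.l1_loss(src_protos[valid], tgt_protos[valid])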

Usage

Requirements

The code runs with Python 3.8.13. To install the required packages, use:

pip install -r requirements.txt

Set Data Directory

The following environment variable must be set:

export DATA_DIR=/path/to/data/dir
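
The scripts presumably read this variable at startup; a hypothetical Python helper (illustrative, not part of this repository) that resolves and validates it:

import os
from pathlib import Path

def get_data_dir() -> Path:
    # hypothetical helper: fail early if DATA_DIR is missing or wrong
    data_dir = os.environ.get("DATA_DIR")
    if data_dir is None:
        raise RuntimeError("Please set DATA_DIR, e.g. export DATA_DIR=/path/to/data/dir")
    path = Path(data_dir)
    if not path.is_dir():
        raise RuntimeError(f"DATA_DIR does not exist: {path}")
    return path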

Download the Data

Before running the code, download and extract the corresponding datasets to the directory $DATA_DIR.

UDA

Cityscapes

Download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to $DATA_DIR/Cityscapes.

$DATA_DIR
├── Cityscapes
│   ├── leftImg8bit
│   │   ├── train
│   │   ├── val
│   ├── gtFine
│   │   ├── train
│   │   ├── val
├── ...

Afterwards, run the preparation script:

python tools/convert_cityscapes.py $DATA_DIR/Cityscapes
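
This script presumably remaps the gtFine annotations from raw Cityscapes label IDs to the standard 19-class train IDs used for evaluation. A hedged, self-contained sketch of that standard remapping (the actual script may differ, e.g. in output file naming):

import numpy as np
from PIL import Image

# standard Cityscapes labelId -> trainId mapping (19 evaluation classes)
ID_TO_TRAINID = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7,
                 21: 8, 22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14,
                 28: 15, 31: 16, 32: 17, 33: 18}

def convert_label(in_path, out_path):
    label = np.array(Image.open(in_path), dtype=np.uint8)
    out = np.full_like(label, 255)              # 255 = ignore index
    for label_id, train_id in ID_TO_TRAINID.items():
        out[label == label_id] = train_id
    Image.fromarray(out).save(out_path)
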
ACDC

Download rgb_anon_trainvaltest.zip and gt_trainval.zip from here and extract them to $DATA_DIR/ACDC.

$DATA_DIR
├── ACDC
│   ├── rgb_anon
│   │   ├── fog
│   │   ├── night
│   │   ├── rain
│   │   ├── snow
│   ├── gt
│   │   ├── fog
│   │   ├── night
│   │   ├── rain
│   │   ├── snow
├── ...

Dark Zurich

Download Dark_Zurich_train_anon.zip, Dark_Zurich_val_anon.zip, and Dark_Zurich_test_anon_withoutGt.zip from here and extract them to $DATA_DIR/DarkZurich.

$DATA_DIR
├── DarkZurich
│   ├── rgb_anon
│   │   ├── train
│   │   ├── val
│   │   ├── val_ref
│   │   ├── test
│   │   ├── test_ref
│   ├── gt
│   │   ├── val
├── ...

Nighttime Driving

Download NighttimeDrivingTest.zip from here and extract it to $DATA_DIR/NighttimeDrivingTest.

$DATA_DIR
├── NighttimeDrivingTest
│   ├── leftImg8bit
│   │   ├── test
│   ├── gtCoarse_daytime_trainvaltest
│   │   ├── test
├── ...

BDD100k-night

Download 10k Images and Segmentation from here and extract them to $DATA_DIR/bdd100k.

$DATA_DIR
├── bdd100k
│   ├── images
│   │   ├── 10k
│   ├── labels
│   │   ├── sem_seg
├── ...
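
Before training, it can help to sanity-check the layouts above; a small illustrative Python snippet (not part of the repository) that verifies the expected subfolders:

import os
from pathlib import Path

# expected dataset subfolders, taken from the directory trees above
EXPECTED = {
    "Cityscapes": ["leftImg8bit/train", "leftImg8bit/val", "gtFine/train", "gtFine/val"],
    "ACDC": ["rgb_anon/night", "gt/night"],
    "DarkZurich": ["rgb_anon/train", "rgb_anon/val", "gt/val"],
    "NighttimeDrivingTest": ["leftImg8bit/test", "gtCoarse_daytime_trainvaltest/test"],
    "bdd100k": ["images/10k", "labels/sem_seg"],
}

data_dir = Path(os.environ["DATA_DIR"])
for dataset, subdirs in EXPECTED.items():
    for sub in subdirs:
        path = data_dir / dataset / sub
        print(f"{'OK     ' if path.is_dir() else 'MISSING'} {path}")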

Image Enhancement

Illumination Enhancement Images

Download Illumination Enhancement Images from here and extract them to $DATA_DIR/CycleGANCityscapes.

$DATA_DIR
├── CycleGANCityscapes
│   ├── leftImg8bit
│   │   ├── train
│   │   ├── val
│   ├── gtFine
│   │   ├── train
│   │   ├── val
├── ...
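
Since this directory mirrors the Cityscapes layout, each enhanced image presumably corresponds to its daytime original via the same relative path; an illustrative pairing snippet (an assumption about the data pairing, not the repository's actual loader):

import os
from pathlib import Path

data_dir = Path(os.environ["DATA_DIR"])
orig_root = data_dir / "Cityscapes" / "leftImg8bit" / "train"
enh_root = data_dir / "CycleGANCityscapes" / "leftImg8bit" / "train"

# pair each original frame with its illumination-enhanced counterpart
pairs = [(p, enh_root / p.relative_to(orig_root)) for p in sorted(orig_root.rglob("*.png"))]
print(f"{len(pairs)} image pairs, e.g. {pairs[0] if pairs else 'none found'}")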

Pretrained Models

We provide pretrained models for the UDA tasks.

UDA

Qualitative ICDA Predictions

To facilitate qualitative comparisons, test set predictions of ICDA can be directly downloaded:

ICDA Training

To train ICDA on Dark Zurich (single GPU, with AMP), use the following command:

python tools/run.py fit --config configs/cityscapes_darkzurich/ICDA_daformer.yaml --trainer.gpus [0] --trainer.precision 16

ICDA Testing

To evaluate ICDA, e.g. on the Dark Zurich validation set, use the following command:

python tools/run.py validate --config configs/cityscapes_darkzurich/ICDA_daformer.yaml --ckpt_path /path/to/trained/model --trainer.gpus [0]

We also provide pretrained models, which can be downloaded from the link above. To evaluate them, simply pass them as the --ckpt_path argument.

To obtain test set scores for Dark Zurich, predictions must be submitted to the official Dark Zurich evaluation server. To create and save test predictions, use this command:

python tools/run.py predict --config configs/cityscapes_darkzurich/ICDA_daformer.yaml --ckpt_path /path/to/trained/model --trainer.gpus [0]
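
Submissions to the evaluation server are typically uploaded as a zip archive of the prediction images; packaging a folder of saved predictions could look like this (illustrative; adjust the source path to wherever the predict run wrote its outputs, and check the server's required layout):

import shutil

# illustrative: zip a folder of saved predictions for upload
shutil.make_archive("darkzurich_test_predictions", "zip", "/path/to/saved/predictions")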

Citation

If you find this code useful in your research, please consider citing the paper:

@inproceedings{dong2023icda,
  title={ICDA: Illumination-Coupled Domain Adaptation Framework for Unsupervised Nighttime Semantic Segmentation},
  author={Dong, Chenghao and Kang, Xuejing and Ming, Anlong},
  booktitle={IJCAI},
  pages={672--680},
  year={2023}
}

Credit

The pretrained backbone weights and code are from MMSegmentation. DAFormer code is from the original repo. Our work is implemented with reference to Refign; thanks to the authors for their great work.

Contact

For questions about the code or paper, feel free to contact me by email.
