M. Naseer Subhani and Mohsen Ali
This repo contains the implementation of the paper "Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation", introduced at ECCV 2020.
If you use this code in your research, please cite us:
@article{subhani2020learning,
  title={Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation},
  author={Subhani, M Naseer and Ali, Mohsen},
  journal={arXiv preprint arXiv:2007.14449},
  year={2020}
}
- Ubuntu 16.04 with an NVIDIA Tesla K80 GPU.
- PyTorch 1.0.0
- Python 3.5
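A quick way to sanity-check the environment before training (this snippet is a convenience check, not part of the repo):

```python
# Convenience check (not part of this repo): prints the interpreter and
# PyTorch versions and whether a CUDA device is visible.
import sys
import torch

print("Python:", sys.version.split()[0])        # expected around 3.5
print("PyTorch:", torch.__version__)            # expected 1.0.0
print("CUDA available:", torch.cuda.is_available())
```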
The root directory is supposed to be "LSE/".
a. Datasets:
- Download the GTA5 dataset.
- Download the Cityscapes dataset.
- Download SYNTHIA-RAND-CITYSCAPES. Make sure to convert the class IDs to the Cityscapes format (see the conversion sketch after this list), or download the converted SYNTHIA labels from this Link.
- Put all datasets in the "dataset/" folder.
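The snippet below is a minimal sketch of the SYNTHIA class-ID conversion. The paths, function name, and the (deliberately partial) ID mapping are placeholders to complete and verify against the converted labels linked above; the raw SYNTHIA GT PNGs are 16-bit and may need a different reader (e.g. imageio or OpenCV).

```python
# Minimal sketch of remapping SYNTHIA-RAND-CITYSCAPES ground-truth class IDs
# to Cityscapes train IDs. The mapping below is partial and the paths are
# placeholders -- verify against the converted labels linked above.
import numpy as np
from PIL import Image

SYNTHIA_TO_CITYSCAPES = {
    # synthia_id: cityscapes_train_id (complete and verify the full mapping)
    3: 0,   # road
    4: 1,   # sidewalk
    2: 2,   # building
}

def convert_label(in_path, out_path, ignore_id=255):
    label = np.array(Image.open(in_path))
    if label.ndim == 3:              # SYNTHIA GT stores the class ID in channel 0
        label = label[..., 0]
    out = np.full(label.shape, ignore_id, dtype=np.uint8)
    for syn_id, cs_id in SYNTHIA_TO_CITYSCAPES.items():
        out[label == syn_id] = cs_id
    Image.fromarray(out).save(out_path)
```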
b. Pretrained Initial Source Models:
- Download all pretrained models and put them in "init_models/".
a. Run pip3 install -r requirements.txt.
b. Change the root directory in init.py in the utils folder (a hypothetical excerpt is shown below).
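The variable names below are hypothetical (check utils/init.py for the actual ones); the edit simply amounts to pointing the root path at your local "LSE/" checkout:

```python
# Hypothetical excerpt of utils/init.py -- the actual variable names may differ.
root_dir = "/path/to/LSE/"             # repo root
data_dir = root_dir + "dataset/"       # GTA5 / SYNTHIA / Cityscapes live here
model_dir = root_dir + "init_models/"  # pretrained initial source models
```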
c. Training:
- GTA_to_Cityscapes without Focal Loss:
python3.5 LSE.py --model VGG --source gta5 --gamma 3 --beta 0.1 --focal-loss False --batch-size 1
- GTA_to_Cityscapes with Focal Loss:
python3.5 LSE.py --model VGG --source gta5 --gamma 3 --beta 0.1 --focal-loss True --batch-size 1
- SYNTHIA_to_Cityscapes without Focal Loss:
python3.5 LSE.py --model VGG --source synthia --gamma 3 --beta 0.1 --focal-loss False --batch-size 1
- SYNTHIA_to_Cityscapes with Focal Loss:
python3.5 LSE.py --model VGG --source synthia --gamma 3 --beta 0.1 --focal-loss True --batch-size 1
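The --gamma flag above appears alongside --focal-loss, which suggests a focal-style weighting of the pixel-wise cross-entropy. As an illustration of the general technique only (not necessarily the exact loss implemented in LSE.py), a focal cross-entropy for segmentation typically looks like this:

```python
# Illustration of a focal cross-entropy for segmentation, i.e. what a --gamma
# style hyper-parameter usually controls. Not necessarily the exact loss in LSE.py.
import torch
import torch.nn.functional as F

def focal_cross_entropy(logits, target, gamma=3.0, ignore_index=255):
    """logits: (N, C, H, W) scores; target: (N, H, W) class indices."""
    log_prob = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_prob, target, reduction="none",
                    ignore_index=ignore_index)     # per-pixel cross-entropy
    p_t = torch.exp(-ce)                           # probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()      # down-weights easy pixels
```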
d. Evaluation:
python3.5 eval.py --model VGG --model-name #model file name in .pth from snapshot folder#
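Results on Cityscapes are typically reported as mean IoU over the 19 train classes (fewer for SYNTHIA). A minimal sketch of the standard confusion-matrix-based computation is shown below; eval.py may implement the details differently.

```python
# Sketch of the standard mIoU computation from an accumulated confusion matrix.
import numpy as np

def fast_hist(label, pred, num_classes=19):
    """Accumulate a confusion matrix from flattened label/prediction arrays."""
    mask = (label >= 0) & (label < num_classes)
    return np.bincount(num_classes * label[mask].astype(int) + pred[mask],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)

def per_class_iou(hist):
    """IoU per class = TP / (TP + FP + FN)."""
    return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))

# mIoU = np.nanmean(per_class_iou(hist)), with hist summed over the whole val set
```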
Increase the batch size according to your hardware. Running the algorithm with different initial conditions and parameters may cause the results to vary.
M. Naseer Subhani: msee16021@itu.edu.pk