BoMuDA: Boundless Multi-Source Domain Adaptive Segmentation in Unconstrained Environments

Project Page - https://gamma.umd.edu/researchdirections/autonomousdriving/bomuda/

Watch the video here

Please cite our paper if you find it useful.

@article{kothandaraman2020bomuda,
  title={BoMuDA: Boundless Multi-Source Domain Adaptive Segmentation in Unconstrained Environments},
  author={Kothandaraman, Divya and Chandra, Rohan and Manocha, Dinesh},
  journal={arXiv preprint arXiv:2010.03523},
  year={2020}
}

Table of Contents

- Repo Details and Contents
- Our network
- Training your own model
- Datasets
- Dependencies
- Acknowledgements

Repo Details and Contents

Python version: 3.7

Code structure

Dataloaders

| Dataset | Dataloader | List of images |
| --- | --- | --- |
| CityScapes | dataset/cityscapes.py | dataset/cityscapes_list |
| India Driving Dataset | dataset/idd_dataset.py, idd_openset.py | dataset/idd_list |
| GTA | dataset/gta_dataset.py | dataset/gta_list |
| SynScapes | dataset/synscapes.py | dataset/synscapes_list |
| Berkeley Deep Drive | dataset/bdd/bdd_source.py | dataset/bdd_list |
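
Each dataloader reads a plain-text list of image names (the "List of images" column above) and returns image/label tensors. The sketch below shows, in rough form, how such a list-based dataset plugs into a standard PyTorch DataLoader; the class, argument, and folder names here are illustrative assumptions, not the repo's actual API.

```python
# Illustrative list-based segmentation dataset. Class, argument, and folder
# names are assumptions for this sketch; they do not match the repo's classes.
import os

import numpy as np
import torch
from PIL import Image
from torch.utils import data


class ListSegDataset(data.Dataset):
    """Reads image names from a plain-text list file (one name per line)."""

    def __init__(self, root, list_path, size=(1024, 512)):
        self.root = root
        self.size = size  # (width, height)
        with open(list_path) as f:
            self.names = [line.strip() for line in f if line.strip()]

    def __len__(self):
        return len(self.names)

    def __getitem__(self, index):
        name = self.names[index]
        # Hypothetical folder layout: adapt to the dataset's real structure.
        image = Image.open(os.path.join(self.root, "images", name)).convert("RGB")
        label = Image.open(os.path.join(self.root, "labels", name))
        image = image.resize(self.size, Image.BICUBIC)
        label = label.resize(self.size, Image.NEAREST)
        image = np.asarray(image, np.float32).transpose((2, 0, 1))  # HWC -> CHW
        label = np.asarray(label, np.int64)
        return torch.from_numpy(image), torch.from_numpy(label), name


# Usage: wrap the dataset in a standard PyTorch DataLoader.
# loader = data.DataLoader(
#     ListSegDataset("/path/to/cityscapes", "/path/to/cityscapes_list.txt"),
#     batch_size=2, shuffle=True, num_workers=4)
```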

Our network

Training your own model

Stage 1: Train networks for single-source domain adaptation on various source-target pairs.

python train_singlesourceDA.py
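
Stage 1 broadly follows the adversarial domain adaptation recipe of AdaptSegNet (from which the code base is derived): the segmentation network is trained with a supervised loss on source images and an adversarial loss that pushes its target-domain outputs to fool a domain discriminator. The following is a minimal conceptual sketch of one such training step; all names are illustrative, and the repo's actual loop is in train_singlesourceDA.py.

```python
# Conceptual single-source adversarial DA step (AdaptSegNet-style sketch).
# Module and variable names are illustrative, not the repo's actual code.
import torch
import torch.nn.functional as F


def da_step(seg_net, disc, opt_seg, opt_disc,
            src_img, src_lbl, tgt_img, lambda_adv=0.001):
    # 1) Update the segmentation network.
    opt_seg.zero_grad()
    src_pred = seg_net(src_img)                       # (N, C, H, W) logits
    seg_loss = F.cross_entropy(src_pred, src_lbl, ignore_index=255)

    tgt_pred = seg_net(tgt_img)
    d_tgt = disc(F.softmax(tgt_pred, dim=1))
    # Adversarial loss: make target outputs look like source (label 0)
    # to the discriminator.
    adv_loss = F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))
    (seg_loss + lambda_adv * adv_loss).backward()
    opt_seg.step()

    # 2) Update the discriminator: source = 0, target = 1.
    opt_disc.zero_grad()
    d_src = disc(F.softmax(src_pred.detach(), dim=1))
    d_tgt = disc(F.softmax(tgt_pred.detach(), dim=1))
    disc_loss = (F.binary_cross_entropy_with_logits(d_src, torch.zeros_like(d_src))
                 + F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt)))
    disc_loss.backward()
    opt_disc.step()
    return seg_loss.item(), adv_loss.item(), disc_loss.item()
```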

Stage 2: Use the trained single-source networks, and the corresponding domain discriminators for multi-source domain adaptation.

python train_bddbase_multi3source_furtheriterations.py
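
Stage 2 reuses the per-source segmentation networks and domain discriminators trained in Stage 1. As one hypothetical illustration of how those discriminators can rank the single-source models for a given target image (this is not the paper's exact BoMuDA procedure; see the paper and train_bddbase_multi3source_furtheriterations.py for the real algorithm):

```python
# Hypothetical discriminator-guided source selection for a target image.
# This only illustrates the idea of reusing the Stage 1 discriminators to
# rank the single-source models; it is not the repo's actual procedure.
import torch
import torch.nn.functional as F


@torch.no_grad()
def pick_best_source(tgt_img, seg_nets, discs):
    """seg_nets and discs are parallel per-source lists from Stage 1."""
    best_score, best_pred = None, None
    for seg_net, disc in zip(seg_nets, discs):
        pred = seg_net(tgt_img)
        # With the source = 0 / target = 1 convention from the Stage 1 sketch,
        # a lower discriminator logit means the prediction looks more "source-like".
        score = disc(F.softmax(pred, dim=1)).mean()
        if best_score is None or score < best_score:
            best_score, best_pred = score, pred
    return best_pred
```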

Evaluation (closed-set DA):

python eval_idd_BoMuDA.py

Evaluation (open-set DA):

python eval_idd_openset.py
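
Both evaluation scripts score predictions on the India Driving Dataset. For reference, segmentation quality is conventionally summarized as per-class IoU and mean IoU computed from a confusion matrix; a generic version of that computation (not the repo's exact evaluation code) is:

```python
# Generic per-class IoU / mean IoU from a confusion matrix, shown for reference.
import numpy as np


def fast_hist(label, pred, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix."""
    mask = (label >= 0) & (label < num_classes)
    return np.bincount(num_classes * label[mask].astype(int) + pred[mask],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)


def mean_iou(hist):
    iou = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist) + 1e-10)
    return float(np.nanmean(iou)), iou
```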

Make sure to set appropriate paths to the dataset folders and to the trained models in the training and evaluation files.

Datasets

Dependencies

PyTorch
NumPy
SciPy
Matplotlib

Acknowledgements

This code borrows heavily from AdaptSegNet.