Paper - BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments
Project Page - https://gamma.umd.edu/researchdirections/autonomousdriving/bomuda/
A video overview is linked from the project page.
Please cite our paper if you find it useful.
```bibtex
@article{kothandaraman2020bomuda,
  title={BoMuDA: Boundless Multi-Source Domain Adaptive Segmentation in Unconstrained Environments},
  author={Kothandaraman, Divya and Chandra, Rohan and Manocha, Dinesh},
  journal={arXiv preprint arXiv:2010.03523},
  year={2020}
}
```
- Paper - BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments
- Repo Details and Contents
- Our network
- Acknowledgements
Python version: 3.7
Dataset | Dataloader | List of images
---|---|---
CityScapes | dataset/cityscapes.py | dataset/cityscapes_list
India Driving Dataset | dataset/idd_dataset.py, dataset/idd_openset.py | dataset/idd_list
GTA | dataset/gta_dataset.py | dataset/gta_list
SynScapes | dataset/synscapes.py | dataset/synscapes_list
Berkeley Deep Drive | dataset/bdd/bdd_source.py | dataset/bdd_list
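Each dataloader above pairs a dataset root with a plain-text list of image names. A minimal sketch of that list-file pattern (the class name and details here are illustrative; the actual loaders in dataset/ also decode images and remap labels):

```python
from pathlib import Path

class ListFileDataset:
    """Illustrative sketch of the list-file pattern used by the loaders
    in dataset/: a text file names one image per line, and __getitem__
    resolves each name against the dataset root.  The real loaders also
    load the image and remap label ids; that part is omitted here."""

    def __init__(self, root, list_path):
        self.root = Path(root)
        with open(list_path) as f:
            self.names = [line.strip() for line in f if line.strip()]

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        # Return the resolved image path; a real loader would read and
        # normalise the image here and return an (image, label) pair.
        return self.root / self.names[idx]
```

An object with `__len__` and `__getitem__` like this can be wrapped directly by `torch.utils.data.DataLoader`.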
Stage 1: Train networks for single source domain adaptation on various source-target pairs.
python train_singlesourceDA.py
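Single-source adaptation follows the AdaptSegNet recipe: supervised cross-entropy on the labeled source domain, plus an output-space adversarial loss that pushes target predictions toward the source distribution. A toy sketch of one training step, with 1x1 convolutions standing in for the DeepLab-style backbone and fully convolutional discriminator (network sizes, optimizers, and `lambda_adv` here are placeholders, not the repo's settings):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the segmentation network and the output-space
# discriminator used in train_singlesourceDA.py.
NUM_CLASSES = 4
seg_net = nn.Conv2d(3, NUM_CLASSES, kernel_size=1)
disc = nn.Conv2d(NUM_CLASSES, 1, kernel_size=1)
opt_seg = torch.optim.SGD(seg_net.parameters(), lr=1e-3)
opt_disc = torch.optim.SGD(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(src_img, src_lbl, tgt_img, lambda_adv=0.001):
    """One adversarial adaptation step: supervised CE on source images,
    adversarial alignment of target predictions in output space."""
    # --- update the segmentation network ---
    opt_seg.zero_grad()
    src_pred = seg_net(src_img)
    loss_seg = F.cross_entropy(src_pred, src_lbl)
    tgt_pred = seg_net(tgt_img)
    # Fool the discriminator: make target outputs look like source (label 1).
    d_tgt = disc(F.softmax(tgt_pred, dim=1))
    loss_adv = bce(d_tgt, torch.ones_like(d_tgt))
    (loss_seg + lambda_adv * loss_adv).backward()
    opt_seg.step()
    # --- update the discriminator (source = 1, target = 0) ---
    opt_disc.zero_grad()
    d_src = disc(F.softmax(src_pred.detach(), dim=1))
    d_tgt = disc(F.softmax(tgt_pred.detach(), dim=1))
    loss_d = bce(d_src, torch.ones_like(d_src)) + \
             bce(d_tgt, torch.zeros_like(d_tgt))
    loss_d.backward()
    opt_disc.step()
    return loss_seg.item(), loss_adv.item(), loss_d.item()
```

Stage 1 repeats this step for every source-target pair, producing one adapted network and one discriminator per source domain.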
Stage 2: Use the trained single-source networks and their corresponding domain discriminators for multi-source domain adaptation.
python train_bddbase_multi3source_furtheriterations.py
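The multi-source stage has to combine the per-source branches from Stage 1 into a single target prediction. One illustrative way to do this (a simplified sketch of the idea, not the exact BoMuDA fusion rule) is to weight each branch's softmax map by how target-like its discriminator scores the input:

```python
import numpy as np

def fuse_predictions(branch_probs, branch_scores):
    """Fuse per-branch softmax maps (each of shape [C, H, W]) into one
    segmentation map, weighting each single-source branch by a scalar
    discriminator score (higher = the branch fits the target better).
    Illustrative fusion rule; the repo's multi-source logic differs."""
    w = np.asarray(branch_scores, dtype=float)
    w = np.exp(w - w.max())
    w /= w.sum()                               # softmax over branches
    stacked = np.stack(branch_probs)           # [num_branches, C, H, W]
    fused = np.tensordot(w, stacked, axes=1)   # weighted sum -> [C, H, W]
    return fused.argmax(axis=0)                # per-pixel class map [H, W]
```

With strongly separated scores this reduces to picking the single best source branch, which matches the intuition of selecting the source domain closest to the target.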
Evaluation (closed-set DA):
python eval_idd_BoMuDA.py
Evaluation (open-set DA):
python eval_idd_openset.py
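Open-set evaluation must account for target classes never seen in any source domain. A common heuristic for this (shown here as a sketch; the exact rule in eval_idd_openset.py may differ, and the unknown-label id below is hypothetical) is to mark a pixel as unknown when its top softmax probability falls below a confidence threshold:

```python
import numpy as np

UNKNOWN = 255  # hypothetical id for the open-set "unknown" label

def openset_predict(prob_map, threshold=0.5):
    """Assign each pixel its argmax class over a softmax map of shape
    [C, H, W], but relabel pixels whose top probability is below
    `threshold` as unknown.  A standard open-set heuristic, not
    necessarily the exact rule used in eval_idd_openset.py."""
    labels = prob_map.argmax(axis=0)   # [H, W] most likely known class
    conf = prob_map.max(axis=0)        # [H, W] confidence of that class
    labels[conf < threshold] = UNKNOWN # low-confidence pixels -> unknown
    return labels
```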
Make sure to set the paths to the dataset folders and the model checkpoints in the training and evaluation files.
Dependencies:
- PyTorch
- NumPy
- SciPy
- Matplotlib
This code borrows heavily from AdaptSegNet.