
Paper - BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments

Project Page - https://gamma.umd.edu/researchdirections/autonomousdriving/bomuda/

Watch the video here

Please cite our paper if you find it useful.

@article{kothandaraman2020bomuda,
  title={BoMuDA: Boundless Multi-Source Domain Adaptive Segmentation in Unconstrained Environments},
  author={Kothandaraman, Divya and Chandra, Rohan and Manocha, Dinesh},
  journal={arXiv preprint arXiv:2010.03523},
  year={2020}
}


Repo Details and Contents

Python version: 3.7

Code structure

Dataloaders <br>

| Dataset | Dataloader | List of images |
| --- | --- | --- |
| CityScapes | dataset/cityscapes.py | dataset/cityscapes_list |
| India Driving Dataset | dataset/idd_dataset.py, idd_openset.py | dataset/idd_list |
| GTA | dataset/gta_dataset.py | dataset/gta_list |
| SynScapes | dataset/synscapes.py | dataset/synscapes_list |
| Berkeley Deep Drive | dataset/bdd/bdd_source.py | dataset/bdd_list |
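Each dataloader pairs a dataset root with an image-list file (the files under the corresponding `*_list` directory). A minimal sketch of this pattern, assuming one relative image path per line in the list file; the class and directory names here are hypothetical, and the real classes in dataset/ additionally load pixels and remap label IDs:

```python
from pathlib import Path

class ListDataset:
    """Illustrative sketch of the list-file pattern: an image-list file
    names the samples, and paths are resolved against the dataset root."""

    def __init__(self, root, list_path):
        self.root = Path(root)
        # One relative image path per line; blank lines are ignored.
        with open(list_path) as f:
            self.ids = [line.strip() for line in f if line.strip()]

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, index):
        name = self.ids[index]
        # "images" / "labels" subdirectories are an assumption for this
        # sketch; real datasets use their own directory layouts.
        image_path = self.root / "images" / name
        label_path = self.root / "labels" / name
        return image_path, label_path
```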

Our network

<p align="center"> <img src="ICCVW_Overview.png"> </p>

Training your own model

Stage 1: Train networks for single source domain adaptation on various source-target pairs. <br>

python train_singlesourceDA.py
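Stage 1 follows the AdaptSegNet-style recipe acknowledged below: the segmentation network is trained with cross-entropy on labelled source images plus a small adversarial term that pushes its target-domain outputs to fool a domain discriminator. The NumPy sketch below shows only the two loss terms on toy arrays; the shapes, the 1e-3 adversarial weight, and the function names are illustrative assumptions, not the code in train_singlesourceDA.py:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segmentation_ce(logits, labels):
    """Per-pixel cross-entropy on labelled SOURCE images.
    logits: (H, W, C) class scores; labels: (H, W) integer class IDs."""
    p = softmax(logits)
    h, w = labels.shape
    picked = p[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.mean(np.log(picked + 1e-8))

def adversarial_loss(disc_scores_target):
    """Adversarial term on unlabelled TARGET images: the segmenter is
    rewarded when the discriminator scores its target predictions as
    source-like (label 1). disc_scores_target: (H, W) sigmoid scores."""
    return -np.mean(np.log(disc_scores_target + 1e-8))

# Toy shapes; a real run gets these from the CNN and its discriminator.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 3))           # source segmentation logits
labels = rng.integers(0, 3, size=(4, 4))      # source ground truth
d_out = rng.uniform(0.1, 0.9, size=(4, 4))    # discriminator on target preds

lambda_adv = 1e-3                             # adversarial weight (assumed)
total = segmentation_ce(logits, labels) + lambda_adv * adversarial_loss(d_out)
```

In the full pipeline the discriminator is trained in alternation, learning to separate source outputs from target outputs while the segmenter minimizes the combined loss above.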

Stage 2: Use the trained single-source networks and their corresponding domain discriminators for multi-source domain adaptation.

python train_bddbase_multi3source_furtheriterations.py
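The multi-source stage combines the predictions of the single-source branches into one segmentation map. As a simplified illustration of that idea (not the actual selection rule in train_bddbase_multi3source_furtheriterations.py), the sketch below fuses per-source softmax outputs by taking, at each pixel, the class predicted by the most confident branch:

```python
import numpy as np

def fuse_multi_source(prob_maps):
    """Per-pixel max-confidence fusion of S single-source branches.
    prob_maps: (S, H, W, C) softmax probabilities from S branches.
    Returns an (H, W) integer class map. Simplified sketch only."""
    conf = prob_maps.max(axis=-1)       # (S, H, W) top-class confidence
    best_src = conf.argmax(axis=0)      # (H, W) winning branch per pixel
    cls = prob_maps.argmax(axis=-1)     # (S, H, W) per-branch class map
    h, w = best_src.shape
    # Pick each pixel's class from its winning branch.
    return cls[best_src, np.arange(h)[:, None], np.arange(w)[None, :]]
```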

Evaluation (closed-set DA):

python eval_idd_BoMuDA.py

Evaluation (open-set DA):

python eval_idd_openset.py

Make sure to set the appropriate paths to the dataset folders and to the trained models in the training and evaluation scripts. <br>

Datasets

Dependencies

PyTorch <br> NumPy <br> SciPy <br> Matplotlib <br>

Acknowledgements

This code borrows heavily from AdaptSegNet.