💀 HAMLET

To Adapt or Not to Adapt? Real-Time Adaptation for Semantic Segmentation (ICCV23)

Marc Botet Colomer<sup>1,2*</sup> Pier Luigi Dovesi<sup>3*†</sup> Theodoros Panagiotakopoulos<sup>4</sup> Joao Frederico Carvalho <sup>1</sup> Linus Härenstam-Nielsen<sup>5,6</sup> Hossein Azizpour<sup>2</sup> Hedvig Kjellström<sup>2,3</sup> Daniel Cremers<sup>5,6,7</sup> Matteo Poggi<sup>8</sup>

<sup>1</sup> Univrses <sup>2</sup> KTH <sup>3</sup> Silo AI <sup>4</sup> King <sup>5</sup> Technical University of Munich <sup>6</sup> Munich Center of Machine Learning <sup>7</sup> University of Oxford <sup>8</sup> University of Bologna

<sup>*</sup> Joint first authorship. <sup>†</sup> Part of the work carried out while at Univrses.

📜 arxiv 💀 project page 📽️ video

Method Cover

Citation

If you find this repo useful for your work, please cite our paper:

@inproceedings{colomer2023toadapt,
      title = {To Adapt or Not to Adapt? Real-Time Adaptation for Semantic Segmentation},
      author = {Botet Colomer, Marc and 
                Dovesi, Pier Luigi and 
                Panagiotakopoulos, Theodoros and 
                Carvalho, Joao Frederico and 
                H{\"a}renstam-Nielsen, Linus and 
                Azizpour, Hossein and 
                Kjellstr{\"o}m, Hedvig and 
                Cremers, Daniel and
                Poggi, Matteo},
      booktitle = {IEEE International Conference on Computer Vision},
      note = {ICCV},
      year = {2023}
}

Setup Environment

For this project, we used Python 3.9.13. We recommend setting up a new virtual environment:

python -m venv ~/venv/hamlet
source ~/venv/hamlet/bin/activate

In that environment, the requirements can be installed with:

pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.3.7  # requires the other packages to be installed first

All experiments were executed on an NVIDIA RTX 3090.
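Before launching a training it can be useful to confirm the environment is complete. The snippet below is our own sanity check, not part of the repo; `check_env` and `python_matches` are hypothetical helper names:

```python
import importlib.util
import sys

def check_env(required=("torch", "mmcv")):
    """Map each package name to whether it can be imported in this env."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in required}

def python_matches(major=3, minor=9):
    """True if the interpreter matches the Python 3.9 used for the paper."""
    return sys.version_info[:2] == (major, minor)
```

If `check_env()` reports a missing package, re-run the `pip install` steps above (remember that `mmcv-full` must be installed after the other requirements).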

Setup Datasets

Cityscapes: Please download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to /data/datasets/cityscapes.

Rainy Cityscapes: Please follow the steps described here: https://team.inria.fr/rits/computer-vision/weather-augment/

If you have trouble creating the rainy dataset, please contact us at domain-adaptation-group@googlegroups.com to obtain the Rainy Cityscapes dataset.

We refer to MMSegmentation for further instructions about the dataset structure.

Prepare the source dataset:

python tools/convert_datasets/cityscapes.py /data/datasets/cityscapes --out-dir data/cityscapes --nproc 8
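The conversion step generates label maps remapped onto the 19 Cityscapes training classes. As a rough illustration (the real script relies on the full cityscapesscripts mapping; only a subset is shown here):

```python
# Subset of the official Cityscapes labelId -> trainId table; ids not in
# the table (void classes, etc.) are mapped to 255 and ignored in training.
LABEL_TO_TRAIN = {
    7: 0,    # road
    8: 1,    # sidewalk
    11: 2,   # building
    19: 6,   # traffic light
    20: 7,   # traffic sign
    24: 11,  # person
    26: 13,  # car
    33: 18,  # bicycle
}

def to_train_ids(label_row):
    """Remap one row of raw label ids to train ids (255 = ignore)."""
    return [LABEL_TO_TRAIN.get(v, 255) for v in label_row]
```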

Training

For convenience, it is possible to run the default configuration by selecting experiment -1. If wandb is configured, logging can be enabled by setting the wandb argument to 1:

python run_experiments.py --exp -1 --wandb 1

All assets to run a training can be found here.

Make sure to place the pretrained model mitb1_uda.pth in pretrained/.

We provide a config.py file that can easily be modified to run multiple experiments by changing parameters. Make sure to place the random modules in random_modules/.
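One simple way such a config file can drive multiple experiments is by expanding a parameter grid into one config per run. This sketch uses hypothetical keys (`lr`, `seed`), not the repo's actual config fields:

```python
import copy
import itertools

# Illustrative base config; keys are placeholders, not the repo's schema.
BASE = {"model": "mitb1_uda", "lr": 6e-5, "seed": 0}

def expand_grid(base, grid):
    """Yield one config dict per combination of the swept parameters."""
    keys = list(grid)
    for combo in itertools.product(*(grid[k] for k in keys)):
        cfg = copy.deepcopy(base)
        cfg.update(dict(zip(keys, combo)))
        yield cfg

configs = list(expand_grid(BASE, {"lr": [6e-5, 1e-4], "seed": [0, 1]}))
```

Each resulting dict could then be passed to the training entry point as a separate experiment.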

Code structure

This code is based on the MMSegmentation project. The most relevant files are:

Acknowledgements

This project is based on the following open-source projects.