# LPLD (Low-confidence Pseudo Label Distillation) (ECCV 2024)

<a href="https://arxiv.org/abs/2407.13524"><img src="https://img.shields.io/badge/arXiv-2407.13524-0096c7.svg?style=plastic" alt="arXiv"></a>

We are currently refactoring the original code; please wait for the final version. In the meantime, you can run the example code using the instructions below.

This is the official code repository for *Enhancing Source-Free Domain Adaptive Object Detection with Low-confidence Pseudo Label Distillation*, accepted to ECCV 2024.
## Installation and Environment Settings (Instructions)

- We use Python 3.6 and PyTorch 1.9.0.
- The codebase is built on Detectron2.

```bash
git clone https://github.com/junia3/LPLD.git

conda create -n LPLD python=3.6
conda activate LPLD
conda install pytorch==1.9.0 torchvision==0.10.0 torchaudio==0.9.0 cudatoolkit=10.2 -c pytorch

cd LPLD
pip install -r requirements.txt

# Make sure your GCC and G++ versions are <= 8.0
cd ..
python -m pip install -e LPLD
```
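After installation, a quick sanity check can confirm that the expected package versions are visible. This is a minimal sketch under the assumption that the editable install above exposes the Detectron2-based package as `detectron2`; adjust the imports if the package name differs in your environment.

```python
# Hypothetical environment check: verifies that the pinned versions installed above
# are importable and that a CUDA device is visible to PyTorch.
import torch
import torchvision
import detectron2

print("PyTorch:", torch.__version__)            # expected: 1.9.0
print("torchvision:", torchvision.__version__)  # expected: 0.10.0
print("Detectron2:", detectron2.__version__)
print("CUDA available:", torch.cuda.is_available())
```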
## Dataset Preparation

- Cityscapes, FoggyCityscapes / Download Webpage / Google Drive (preprocessed)
- PASCAL_VOC / Download Webpage
- Clipart / Download Webpage / Google Drive (preprocessed)
- Watercolor / Download Webpage / Google Drive (preprocessed)
- Sim10k / Download Webpage

Make sure that all downloaded datasets are located in the `./dataset` folder. After preparing the datasets, you will have the following file structure:
```
LPLD
...
├── dataset
│   ├── foggy
│   ├── cityscape
│   ├── clipart
│   └── watercolor
...
```

Make sure that all datasets follow the PASCAL_VOC format. For example, the foggy dataset is stored as follows:
```bash
$ cd ./dataset/foggy/VOC2007/
$ ls
Annotations  ImageSets  JPEGImages
$ cat ImageSets/Main/test_t.txt
target_munster_000157_000019_leftImg8bit_foggy_beta_0.02
target_munster_000124_000019_leftImg8bit_foggy_beta_0.02
target_munster_000110_000019_leftImg8bit_foggy_beta_0.02
...
```
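For reference, a dataset laid out in this VOC format can be registered with Detectron2's built-in helper, as in the sketch below. This is illustrative only and is not the repository's own registration code; the dataset name, split, and class list are assumptions based on the directory layout above.

```python
# Hypothetical registration of ./dataset/foggy as a PASCAL-VOC-style dataset.
from detectron2.data.datasets import register_pascal_voc

# Object classes commonly used for Cityscapes -> FoggyCityscapes adaptation (assumption).
FOGGY_CLASSES = ["bus", "bicycle", "car", "motorcycle", "person", "rider", "train", "truck"]

register_pascal_voc(
    name="foggy_test_t",                # hypothetical dataset name
    dirname="./dataset/foggy/VOC2007",  # contains Annotations/, ImageSets/, JPEGImages/
    split="test_t",                     # reads ImageSets/Main/test_t.txt
    year=2007,
    class_names=FOGGY_CLASSES,
)
```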
## Execution

Currently, we only provide code and results for ResNet-50 backbone baselines. We plan to add VGG-16 backbone baselines and code.

### Test models

```bash
CUDA_VISIBLE_DEVICES=$GPU_ID python tools/test_main.py --eval-only \
    --config-file configs/sfda/sfda_city2foggy.yaml --model-dir $WEIGHT_LOCATION
```
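In standard Detectron2 terms, `--config-file` and `--model-dir` correspond roughly to config loading and checkpoint restoring, sketched below. This is a hedged illustration rather than what `tools/test_main.py` actually does: the repository's YAML likely relies on project-specific config keys and setup code, and the weight path shown is a placeholder.

```python
# Rough sketch of loading a config and checkpoint with stock Detectron2 utilities.
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

cfg = get_cfg()
cfg.merge_from_file("configs/sfda/sfda_city2foggy.yaml")      # may require the repo's own config defaults
model = build_model(cfg)                                      # builds the ResNet-50 based detector
DetectionCheckpointer(model).load("path/to/model_final.pth")  # placeholder weight file
model.eval()
```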
### Visualize

We provide visualization code. We use our trained model to detect objects in an example FoggyCityscapes image.
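One way to reproduce such a visualization with stock Detectron2 utilities is sketched below; the config path, weight file, dataset name, and image filename are placeholders, and the repository's own visualization script may differ.

```python
# Hypothetical visualization of detections on a single foggy image.
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.data import MetadataCatalog
from detectron2.utils.visualizer import Visualizer

cfg = get_cfg()
cfg.merge_from_file("configs/sfda/sfda_city2foggy.yaml")  # may require the repo's setup code
cfg.MODEL.WEIGHTS = "path/to/model_final.pth"             # placeholder trained weights

predictor = DefaultPredictor(cfg)
image = cv2.imread("example_foggy.png")                   # placeholder example image (BGR)
outputs = predictor(image)

# Draw predicted boxes/classes; "foggy_test_t" is the hypothetical dataset name registered above.
viz = Visualizer(image[:, :, ::-1], MetadataCatalog.get("foggy_test_t"))
result = viz.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.imwrite("detections.png", result.get_image()[:, :, ::-1])
```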
## Pretrained weights (LPLD)

| Source     | Target          | Download Link |
|------------|-----------------|---------------|
| Cityscapes | FoggyCityscapes | TBD           |
| KITTI      | Cityscapes      | TBD           |
| Sim10k     | Cityscapes      | TBD           |
| Pascal VOC | Watercolor      | TBD           |
| Pascal VOC | Clipart         | TBD           |
## Citation

```bibtex
@article{yoon2024enhancing,
  title={Enhancing Source-Free Domain Adaptive Object Detection with Low-confidence Pseudo Label Distillation},
  author={Yoon, Ilhoon and Kwon, Hyeongjun and Kim, Jin and Park, Junyoung and Jang, Hyunsung and Sohn, Kwanghoon},
  journal={arXiv preprint arXiv:2407.13524},
  year={2024}
}
```