SoC4SS-FGVC

This repo is the official PyTorch implementation of our paper:

Roll with the Punches: Expansion and Shrinkage of Soft Label Selection for Semi-supervised Fine-Grained Learning
Authors: Yue Duan, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang and Yinghuan Shi

Requirements

How to Train

Important Args
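The training commands in this repo share a set of flags. The short descriptions below are inferred from how each flag is used in the examples that follow, so treat them as a rough guide rather than exhaustive documentation:

--dataset: dataset to train on (e.g., semi_aves)
--num_classes: number of classes in the dataset (200 for Semi-Aves)
--unlabel [in/inout]: use in-distribution unlabeled data only, or both in- and out-of-distribution unlabeled data
--num_train_iter: total number of training iterations
--num_eval_iter: interval (in iterations) between evaluations
--save_name: name of the run, used for saving checkpoints; --overwrite replaces an existing run with the same name
--resume --load_path: resume from, or warm-start with, the checkpoint at the given path
--pretrained: treat the loaded weights as a pre-trained backbone (used with MoCo/expert models)
--lr: learning rate
--seed: random seed
--world-size / --rank / --gpu / --multiprocessing-distributed: standard PyTorch distributed-training options (see below)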

Training with a Single GPU

python train_soc.py --rank 0 --gpu [0/1/...] @@@other args@@@
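As a concrete sketch (the dataset-specific flags here are borrowed from the Semi-Aves examples below, not a prescribed recipe), a single-GPU run on GPU 0 could look like:

python train_soc.py --rank 0 --gpu 0 --seed 1 --num_eval_iter 2000 --overwrite --save_name aves_in_sc --dataset semi_aves --num_classes 200 --unlabel in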

Training with Multiple GPUs

python train_soc.py --world-size 1 --rank 0 @@@other args@@@
python train_soc.py --world-size 1 --rank 0 --multiprocessing-distributed @@@other args@@@
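In the usual PyTorch launcher convention that these flags appear to follow, the first command trains one process across all visible GPUs (DataParallel), while adding --multiprocessing-distributed spawns one distributed worker per GPU; check train_soc.py for the authoritative behavior.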

Examples

To better reproduce our experimental results, we recommend training on multiple GPUs with DataParallel.

Using In-distribution Unlabeled Data

Training from scratch

python train_soc.py --world-size 1 --rank 0 --seed 1 --num_eval_iter 2000 --overwrite --save_name aves_in_sc --dataset semi_aves --num_classes 200 --unlabel in 

Training from scratch with MoCo

python train_soc.py --world-size 1 --rank 0 --seed 1 --num_eval_iter 1000 --overwrite --save_name aves_in_sc_moco --dataset semi_aves --num_classes 200 --unlabel in --resume --load_path @path to MoCo pre-trained model@ --pretrained --num_train_iter 200000

Training from an expert model (with or without MoCo)

python train_soc.py --world-size 1 --rank 0 --seed 1 --num_eval_iter 500 --overwrite --save_name aves_in_pre --dataset semi_aves --num_classes 200 --unlabel in --resume --load_path @path to pre-trained model@ --pretrained --lr 0.001 --num_train_iter 50000
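Note the lower learning rate (--lr 0.001) and shorter schedule (--num_train_iter 50000) compared with training from scratch; presumably the expert checkpoint only needs fine-tuning rather than full training.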

The expert models and MoCo models can be obtained here (provided by https://github.com/cvl-umass/ssl-evaluation).

Using Out-of-Distribution Unlabeled Data

Training from scratch

python train_soc.py --world-size 1 --rank 0 --seed 1 --num_eval_iter 2000 --overwrite --save_name aves_inout_sc --dataset semi_aves --num_classes 200 --unlabel inout 

Evaluation

Each time you start training, the evaluation results of the current model are displayed. To evaluate a trained model, resume training from its checkpoint, i.e., pass --resume --load_path @path to your checkpoint@.
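For example, to report metrics for the from-scratch Semi-Aves model, one could resume it with the same flags it was trained with (a sketch; substitute your own checkpoint path):

python train_soc.py --world-size 1 --rank 0 --resume --load_path @path to your checkpoint@ --save_name aves_in_sc --dataset semi_aves --num_classes 200 --unlabel in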

Results (e.g. seed=1)

| Dataset | Unlabeled Data | Pre-training | Top1-Acc (%) | Top5-Acc (%) | Checkpoint |
| --- | --- | --- | --- | --- | --- |
| Semi-Aves | in-distribution | - | 32.3 | 55.5 | here |
| Semi-Aves | in-distribution | MoCo | 39.5 | 62.5 | here |
| Semi-Aves | in-distribution | ImageNet | 56.8 | 79.1 | here |
| Semi-Aves | in-distribution | ImageNet + MoCo | 57.1 | 79.1 | here |
| Semi-Aves | in-distribution | iNat | 71.0 | 88.4 | here |
| Semi-Aves | in-distribution | iNat + MoCo | 70.2 | 88.3 | here |
| Semi-Aves | out-of-distribution | - | 27.5 | 50.7 | here |
| Semi-Aves | out-of-distribution | MoCo | 40.4 | 65.9 | here |
| Semi-Fungi | in-distribution | - | 38.50 | 61.35 | here |
| Semi-Fungi | in-distribution | MoCo | 46.9 | 71.4 | here |
| Semi-Fungi | in-distribution | ImageNet | 61.1 | 83.2 | here |
| Semi-Fungi | in-distribution | ImageNet + MoCo | 61.8 | 85.9 | here |
| Semi-Fungi | in-distribution | iNat | 62.3 | 85.0 | here |
| Semi-Fungi | in-distribution | iNat + MoCo | 62.2 | 84.4 | here |
| Semi-Fungi | out-of-distribution | - | 35.6 | 60.6 | here |
| Semi-Fungi | out-of-distribution | MoCo | 50.0 | 74.8 | here |

Citation

Please cite our paper if you find SoC useful:

@inproceedings{duan2024roll,
  title={Roll with the Punches: Expansion and Shrinkage of Soft Label Selection for Semi-supervised Fine-Grained Learning},
  author={Duan, Yue and Zhao, Zhen and Qi, Lei and Zhou, Luping and Wang, Lei and Shi, Yinghuan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={10},
  pages={11829--11837},
  year={2024}
}

or

@article{duan2023roll,
  title={Roll With the Punches: Expansion and Shrinkage of Soft Label Selection for Semi-supervised Fine-Grained Learning},
  author={Duan, Yue and Zhao, Zhen and Qi, Lei and Zhou, Luping and Wang, Lei and Shi, Yinghuan},
  journal={arXiv preprint arXiv:2312.12237},
  year={2023}
}