:star2: Sparse Spatial Transformers for Few-Shot Learning

This repository implements Sparse Spatial Transformers for Few-Shot Learning (SSFormers).
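
To give a feel for the method, here is a minimal, illustrative sketch of the core idea behind SSFormers: score each query-image patch by its best match against the support-set patches, then keep only the most task-relevant (sparse) subset of patches when computing the class score. All names below (`cosine`, `sparse_spatial_match`, `keep`) are hypothetical and simplified; the repository's actual model is more involved.

```python
import numpy as np

def cosine(a, b):
    # Row-wise cosine similarity: rows of a vs rows of b -> (len(a), len(b)) matrix.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def sparse_spatial_match(query_patches, support_patches, keep=4):
    """Toy sketch (not the repo's implementation): score each query patch by
    its best-matching support patch, keep only the top-`keep` task-relevant
    patches, and average their similarities as the class score."""
    sim = cosine(query_patches, support_patches)  # (Q, S) patch similarities
    best = sim.max(axis=1)                        # best support match per query patch
    top = np.sort(best)[-keep:]                   # sparse selection of relevant patches
    return top.mean()
```

The sparse selection is the key design choice: background patches with no good support match are discarded instead of diluting the class score.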


:bookmark: Citation

If you find our work useful, please consider citing it with the following BibTeX entry:

@Article{ssformers,
	author  = {Chen, Haoxing and Li, Huaxiong and Li, Yaohui and Chen, Chunlin},
	title   = {Sparse Spatial Transformers for Few-Shot Learning},
	journal = {Sci. China Inf. Sci.},
	year    = {2023},
}

:palm_tree: Prerequisites

:bookmark_tabs: Datasets

Dataset download link:

Pre-trained backbone

We provide pre-trained backbones at https://pan.baidu.com/s/1v2k-mdCpGLtKnKG5ijYXMw (key: 334q).

:four_leaf_clover: Few-shot Classification

Train the model:

python experiments/run_trainer.py --cfg ./configs/miniImagenet/ST_N5K1_R12.yaml --device 0
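
The config name `ST_N5K1_R12` refers to the 5-way 1-shot setting (N=5 classes, K=1 support example per class) with a ResNet-12 backbone. For readers new to episodic training, here is a minimal, hypothetical sketch of how an N-way K-shot episode is sampled (the repo's actual sampler is driven by its YAML configs):

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, q_query=15, seed=None):
    """Toy episodic sampler: pick `n_way` classes, then `k_shot` support
    and `q_query` query indices per class. Illustrative only."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = rng.sample(sorted(by_class), n_way)  # choose the episode's classes
    support, query = [], []
    for c in classes:
        picked = rng.sample(by_class[c], k_shot + q_query)
        support += picked[:k_shot]   # labeled support examples
        query += picked[k_shot:]     # query examples to classify
    return support, query
```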

Test the model on the test set:

python experiments/run_evaluator.py --cfg ./configs/miniImagenet/ST_N5K1_R12.yaml -c ./checkpoint/*/*.pth --device 0
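
Few-shot results are conventionally reported as mean accuracy over many test episodes with a 95% confidence interval. A minimal sketch of that computation (the function name `episode_ci` is hypothetical):

```python
import math

def episode_ci(accuracies, z=1.96):
    """Mean accuracy over test episodes with a z*std/sqrt(n) half-width
    (z=1.96 gives the usual 95% confidence interval)."""
    n = len(accuracies)
    mean = sum(accuracies) / n
    var = sum((a - mean) ** 2 for a in accuracies) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return mean, half
```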

For semi-supervised few-shot learning tasks (with trial t=1), train and evaluate with:

python experiments/run_semi_trainer.py --cfg ./configs/miniImagenet/ST_N5K1_semi_with_extractor.yaml --device 0 -t 1

python experiments/run_semi_evaluator.py --cfg ./configs/miniImagenet/ST_N5K1_semi_with_extractor.yaml -c ./checkpoints/*/*.pth --device 0

Our code is based on MCL and FEAT.

:email: Contacts

Please feel free to contact us if you have any problems.

Email: haoxingchen@smail.nju.edu.cn