<div align="center"> <h1>SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning <br> (WACV 2022)</h1> </div> <div align="center"> <h3><a href=https://martayang.github.io/>Fengyuan Yang</a>, <a href=https://vipl.ict.ac.cn/homepage/rpwang/index.htm>Ruiping Wang</a>, <a href=http://people.ucas.ac.cn/~xlchen?language=en>Xilin Chen</a></h3> </div> <div align="center"> <h4> <a href=https://openaccess.thecvf.com/content/WACV2022/papers/Yang_SEGA_Semantic_Guided_Attention_on_Visual_Prototype_for_Few-Shot_Learning_WACV_2022_paper.pdf>[Paper link]</a>, <a href=https://openaccess.thecvf.com/content/WACV2022/supplemental/Yang_SEGA_Semantic_Guided_WACV_2022_supplemental.pdf>[Supp link]</a></h4> </div>

## 1. Requirements

## 2. Datasets

Note: the above datasets are the same as those used in previous works (e.g., FewShotWithoutForgetting, DeepEMD), EXCEPT that we include additional semantic embeddings (GloVe word embeddings for the first three datasets and attribute embeddings for CUB-FS). Thus, remember to set the `semantic_path` argparse argument accordingly in the training and testing scripts. A minimal loading sketch is given below.
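For reference, here is a minimal sketch of how such a semantic embedding file might be loaded. The file format and key names are assumptions for illustration only, not necessarily the layout used by this repository; check the actual dataset files you download.

```python
import pickle

import numpy as np

# Hypothetical loader: assumes the semantic file maps class names to
# embedding vectors (GloVe word vectors, or attribute vectors for CUB-FS).
def load_semantic_embeddings(semantic_path):
    with open(semantic_path, "rb") as f:
        name_to_vec = pickle.load(f)  # e.g. {"golden_retriever": array of shape (300,), ...}
    classes = sorted(name_to_vec)
    matrix = np.stack([name_to_vec[c] for c in classes])  # (num_classes, embedding_dim)
    return classes, matrix

# Example (hypothetical file name):
# classes, sem = load_semantic_embeddings("data/miniimagenet_glove_300d.pkl")
```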

## 3. Usage

Our training and testing scripts are all located in `scripts/` and are provided as Jupyter notebooks, where both the argparse arguments and the output logs can be easily inspected.

Take the training and testing pipeline on miniImageNet as an example. For the first-stage training, run all cells in `scripts/01_miniimagenet_stage1.ipynb`. For the second-stage training and final testing, run all cells in `scripts/01_miniimagenet_stage2_SEGA_5W1S.ipynb`. If you prefer running outside Jupyter, the notebook cells can be adapted into a plain script, as sketched below.
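The sketch below only illustrates the idea of mirroring the notebooks' argparse setup in a standalone script; apart from `semantic_path` (mentioned above), the argument names are hypothetical placeholders, so copy the exact options from the notebooks.

```python
import argparse

# Illustrative only: mirror the argparse setup found in the notebooks.
parser = argparse.ArgumentParser(description="SEGA training (sketch)")
parser.add_argument("--semantic_path", type=str, required=True,
                    help="path to the semantic embedding file for the chosen dataset")
parser.add_argument("--dataset", type=str, default="miniimagenet")  # placeholder name
parser.add_argument("--stage", type=int, choices=[1, 2], default=1)  # placeholder name
args = parser.parse_args()

print(args)  # pass these arguments into the training entry point used in the notebooks
```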

## 4. Results

The 1-shot and 5-shot classification results can be found in the corresponding jupyter notebooks.

## 5. Pre-trained Models

The pre-trained models for all four datasets after our first training stage can be downloaded from here.
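As a quick sanity check, a downloaded checkpoint can typically be inspected with PyTorch. The file name and dictionary keys below are assumptions and may differ from the released files.

```python
import torch

# Hypothetical file name; substitute the checkpoint you actually downloaded.
ckpt = torch.load("miniimagenet_stage1.pth", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. model state_dict, optimizer state, epoch, ...
```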

## Citation

If you find our paper or code useful, please consider citing our paper:

```bibtex
@inproceedings{yang2022sega,
  title={SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning},
  author={Yang, Fengyuan and Wang, Ruiping and Chen, Xilin},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1056--1066},
  year={2022}
}
```

## Acknowledgments

Our code is based on Dynamic Few-Shot Visual Learning without Forgetting and MetaOptNet, and we really appreciate their work.

## Further

If you have any questions, feel free to contact me. My email is fengyuan.yang@vipl.ict.ac.cn.