<div align="center"> <h1>Semantic Guided Latent Parts Embedding for Few-Shot Learning <br> (WACV 2023)</h1> </div> <div align="center"> <h3><a href=https://martayang.github.io/>Fengyuan Yang</a>, <a href=https://vipl.ict.ac.cn/homepage/rpwang/index.htm>Ruiping Wang</a>, <a href=http://people.ucas.ac.cn/~xlchen?language=en>Xilin Chen</a></h3> </div> <div align="center"> <h4> <a href=https://openaccess.thecvf.com/content/WACV2023/papers/Yang_Semantic_Guided_Latent_Parts_Embedding_for_Few-Shot_Learning_WACV_2023_paper.pdf>[Paper link]</a>, <a href=https://openaccess.thecvf.com/content/WACV2023/supplemental/Yang_Semantic_Guided_Latent_WACV_2023_supplemental.pdf>[Supp link]</a></h4> </div>

1. Requirements

2. Datasets

3. Usage

Our training and testing scripts are both located at scripts/train.sh, and the corresponding output logs can be found in the same folder.

4. Results

The 1-shot and 5-shot classification results can be found in the corresponding output logs.

Citation

If you find our paper or code useful, please consider citing our paper:

@InProceedings{Yang_2023_WACV,
    author    = {Yang, Fengyuan and Wang, Ruiping and Chen, Xilin},
    title     = {Semantic Guided Latent Parts Embedding for Few-Shot Learning},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {5447-5457}
}

Acknowledgments

Our code is based on renet and DeepEMD, and we really appreciate their work.

Further

If you have any questions, feel free to contact me at fengyuan.yang@vipl.ict.ac.cn.