<div align="center">
  <h1>Relational Embedding for Few-Shot Classification <br> (ICCV 2021)</h1>
</div>

<div align="center">
  <h3><a href=http://dahyun-kang.github.io>Dahyun Kang</a>, <a href=http://arunos728.github.io>Heeseung Kwon</a>, <a href=http://juhongm999.github.io>Juhong Min</a>, <a href=http://cvlab.postech.ac.kr/~mcho/>Minsu Cho</a></h3>
</div>

<div align="center">
  <h4> <a href=https://arxiv.org/abs/2108.09666>[paper]</a>, <a href=http://cvlab.postech.ac.kr/research/RENet>[project homepage]</a></h4>
</div>

:heavy_check_mark: Requirements
- Ubuntu 16.04
- Python 3.7
- CUDA 11.0
- PyTorch 1.7.1
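A quick way to confirm that a local setup matches these versions is a minimal check like the one below (a sketch; it only assumes the standard `torch` package installed by the environment above):

```python
# Minimal environment check (assumes only the standard PyTorch package).
import sys
import torch

print(f"Python : {sys.version.split()[0]}")          # expected 3.7.x
print(f"PyTorch: {torch.__version__}")               # expected 1.7.1
print(f"CUDA   : {torch.version.cuda}")              # expected 11.0
print(f"GPU available: {torch.cuda.is_available()}")
```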
:gear: Conda environment installation
conda env create --name renet_iccv21 --file environment.yml
conda activate renet_iccv21
:books: Datasets
cd datasets
bash download_miniimagenet.sh
bash download_cub.sh
bash download_cifar_fs.sh
bash download_tieredimagenet.sh
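Once the downloads finish, a quick existence check along these lines can catch a failed download early (the folder names are assumptions inferred from the script names, not guaranteed by the scripts):

```python
# Quick existence check for the downloaded datasets
# (folder names are assumptions inferred from the download scripts).
import os

for name in ["miniimagenet", "tieredimagenet", "cub", "cifar_fs"]:
    path = os.path.join("datasets", name)
    print(f"{path}: {'found' if os.path.isdir(path) else 'missing'}")
```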
:deciduous_tree: Authors' checkpoints
cd checkpoints
bash download_checkpoints_renet.sh
The file structure should be as follows:
renet/
├── datasets/
├── model/
├── scripts/
├── checkpoints/
│   ├── cifar_fs/
│   ├── cub/
│   ├── miniimagenet/
│   └── tieredimagenet/
train.py
test.py
README.md
environment.yml
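To sanity-check the downloaded checkpoints, a sketch like the following can be used; it assumes the files are standard `torch.save()` outputs under `checkpoints/` and makes no assumption about their exact names or contents:

```python
# List and inspect downloaded checkpoints
# (assumes standard torch.save()'d .pth files; file names and keys are not assumed).
import glob
import torch

for path in sorted(glob.glob("checkpoints/**/*.pth", recursive=True)):
    ckpt = torch.load(path, map_location="cpu")
    summary = list(ckpt.keys())[:5] if isinstance(ckpt, dict) else type(ckpt).__name__
    print(f"{path} -> {summary}")
```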
:pushpin: Quick start: testing scripts
To test in the 5-way K-shot setting:
bash scripts/test/{dataset_name}_5wKs.sh
For example, to test ReNet on the miniImageNet dataset in the 5-way 1-shot setting:
bash scripts/test/miniimagenet_5w1s.sh
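For reference, "5-way K-shot" means every test episode samples 5 classes, K labeled support images per class, and a batch of query images to classify among those 5 classes. The toy sketch below illustrates the protocol only; it is not the repository's data loader:

```python
# Toy illustration of an N-way K-shot episode (not the repository's sampler).
import random

def sample_episode(images_by_class, n_way=5, k_shot=1, n_query=15):
    """images_by_class: dict mapping class label -> list of image identifiers."""
    classes = random.sample(list(images_by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        picked = random.sample(images_by_class[cls], k_shot + n_query)
        support += [(img, episode_label) for img in picked[:k_shot]]
        query   += [(img, episode_label) for img in picked[k_shot:]]
    return support, query  # the model labels each query using only the support set
```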
:fire: Training scripts
To train in the 5-way K-shot setting:
bash scripts/train/{dataset_name}_5wKs.sh
For example, to train ReNet on the CUB dataset in the 5-way 1-shot setting:
bash scripts/train/cub_5w1s.sh
Training & testing a 5-way 1-shot model on the CUB dataset using a single TITAN RTX GPU takes 41m 30s.
:art: Few-shot classification results
Experimental results on few-shot classification datasets with a ResNet-12 backbone. We report results averaged over 2,000 randomly sampled test episodes.
<table>
  <tr> <td>datasets</td> <td colspan="2" align="center">miniImageNet</td> <td colspan="2" align="center">tieredImageNet</td> </tr>
  <tr> <td>setups</td> <td>5-way 1-shot</td> <td>5-way 5-shot</td> <td>5-way 1-shot</td> <td>5-way 5-shot</td> </tr>
  <tr> <td>accuracy</td> <td align="center">67.60</td> <td align="center">82.58</td> <td align="center">71.61</td> <td align="center">85.28</td> </tr>
</table>

<table>
  <tr> <td>datasets</td> <td colspan="2" align="center">CUB-200-2011</td> <td colspan="2" align="center">CIFAR-FS</td> </tr>
  <tr> <td>setups</td> <td>5-way 1-shot</td> <td>5-way 5-shot</td> <td>5-way 1-shot</td> <td>5-way 5-shot</td> </tr>
  <tr> <td>accuracy</td> <td align="center">79.49</td> <td align="center">91.11</td> <td align="center">74.51</td> <td align="center">86.60</td> </tr>
</table>
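Per-episode accuracies are usually summarized as a mean with a 95% confidence interval; the sketch below shows that standard computation for 2,000 episodes (an illustration of the reporting convention, not the repository's evaluation code):

```python
# Mean accuracy and 95% confidence interval over per-episode accuracies
# (illustrates the usual few-shot reporting convention; not the repo's eval code).
import numpy as np

def summarize(episode_accuracies):
    acc = np.asarray(episode_accuracies, dtype=np.float64)
    mean = acc.mean()
    ci95 = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))
    return mean, ci95

accs = np.random.uniform(0.55, 0.80, size=2000)  # stand-in for real episode results
mean, ci = summarize(accs)
print(f"{100 * mean:.2f} ± {100 * ci:.2f} %")
```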
:mag: Related repos
Our project references code from the following repos:
- Zhang et al., DeepEMD
- Ye et al., FEAT
- Wang et al., Non-local neural networks
- Ramachandran et al., Stand-alone self-attention
- Huang et al., DCCNet
- Yang et al., VCN
:love_letter: Acknowledgement
We adopted the main code base from DeepEMD, and we really appreciate it :smiley:. We also sincerely thank all the ICCV reviewers, especially R#2, for their valuable suggestions.
:scroll: Citing RENet
If you find our code or paper useful for your research, please consider citing our work with the following BibTeX:
@inproceedings{kang2021renet,
  author    = {Kang, Dahyun and Kwon, Heeseung and Min, Juhong and Cho, Minsu},
  title     = {Relational Embedding for Few-Shot Classification},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2021}
}