A Closer Look at Few-shot Classification

This repo contains the reference source code for the paper A Closer Look at Few-shot Classification, published at the International Conference on Learning Representations (ICLR 2019). In this project, we provide an integrated testbed for a detailed empirical study of few-shot classification.

Citation

If you find our code useful, please consider citing our work using the following BibTeX entry:

@inproceedings{chen2019closerfewshot,
  title={A Closer Look at Few-shot Classification},
  author={Chen, Wei-Yu and Liu, Yen-Cheng and Kira, Zsolt and Wang, Yu-Chiang and Huang, Jia-Bin},
  booktitle={International Conference on Learning Representations},
  year={2019}
}

Environment

Getting started

CUB

mini-ImageNet

(WARNING: this will download the 155 GB ImageNet dataset. If you already have a copy, you can comment out the corresponding lines 5-6 in download_miniImagenet.sh.)

mini-ImageNet->CUB (cross)

Omniglot

Omniglot->EMNIST (cross_char)

Self-defined setting

Train

Run python ./train.py --dataset [DATASETNAME] --model [BACKBONENAME] --method [METHODNAME] [--OPTIONARG]

For example, run python ./train.py --dataset miniImagenet --model Conv4 --method baseline --train_aug
The commands below follow this example; please refer to io_utils.py for additional options.
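As a sketch of how one might sweep over several methods under this convention (the method names below are assumed to match those accepted by io_utils.py):

```shell
# Hypothetical sweep: print the training command for each method, reusing the
# miniImagenet dataset and Conv4 backbone from the example above.
methods="baseline baseline++ protonet matchingnet relationnet maml"
for m in $methods; do
  echo "python ./train.py --dataset miniImagenet --model Conv4 --method $m --train_aug"
done
```

Dropping the leading echo turns the printed commands into actual training runs.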

Save features

Save the features extracted before the classification layer to speed up testing. This step is not applicable to MAML, but is required for the other methods. Run python ./save_features.py --dataset miniImagenet --model Conv4 --method baseline --train_aug
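A minimal guard for scripted runs might skip this step for MAML (the maml* name pattern is an assumption about how the method flag is spelled):

```shell
# Hypothetical guard: build the save_features.py command only for methods that
# use pre-extracted features; MAML adapts end-to-end at test time and is skipped.
method=baseline
cmd=""
case "$method" in
  maml*) ;;  # no separate feature-extraction step for MAML variants
  *) cmd="python ./save_features.py --dataset miniImagenet --model Conv4 --method $method --train_aug" ;;
esac
echo "$cmd"
```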

Test

Run python ./test.py --dataset miniImagenet --model Conv4 --method baseline --train_aug
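Putting the three stages together, a sketch of the full train, save-features, and test pipeline for one configuration (datasets assumed already prepared):

```shell
# Hypothetical end-to-end pipeline: the same flags are reused across all three
# scripts so the trained checkpoint and saved features belong to one experiment.
cfg="--dataset miniImagenet --model Conv4 --method baseline --train_aug"
for step in train.py save_features.py test.py; do
  echo "python ./$step $cfg"
done
```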

Results

References

Our testbed builds upon several existing publicly available codebases. Specifically, we have modified and integrated the following code into this project:

FAQ