<div align="center"> <h1> Integrative Few-Shot Learning <br> for Classification and Segmentation</h1> </div> <div align="center"> <h3><a href="http://dahyun-kang.github.io">Dahyun Kang</a> &nbsp;&nbsp;&nbsp;&nbsp; <a href="http://cvlab.postech.ac.kr/~mcho/">Minsu Cho</a></h3> </div> <br /> <div align="center"> <a href="https://arxiv.org/abs/2203.15712"><img src="https://img.shields.io/badge/arXiv-2203.15712-b31b1b.svg"/></a> <a href="http://cvlab.postech.ac.kr/research/iFSL"><img src="https://img.shields.io/static/v1?label=project homepage&message=iFSL&color=9cf"/></a> </div> <br /> <div align="center"> <img src="fs-cs/data/assets/teaser.png" alt="result" width="600"/> </div>

This repo is the official implementation of the CVPR 2022 paper: Integrative Few-Shot Learning for Classification and Segmentation.

:scroll: BibTeX source

If you find our code or paper useful, please consider citing:

    @inproceedings{kang2022ifsl,
      author    = {Kang, Dahyun and Cho, Minsu},
      title     = {Integrative Few-Shot Learning for Classification and Segmentation},
      booktitle = {Proceedings of the {IEEE/CVF} Conference on Computer Vision and Pattern Recognition (CVPR)},
      year      = {2022}
    }

:gear: Conda environment installation

The package requirements, including the environment this project is built upon, can be installed via `environment.yml`:

    conda env create --name ifsl_pytorch1.7.0 --file environment.yml -p YOURCONDADIR/envs/ifsl_pytorch1.7.0
    conda activate ifsl_pytorch1.7.0

Make sure to replace `YOURCONDADIR` in the installation path with your conda directory, e.g., `~/anaconda3`.

:books: Datasets

Download the datasets by following the file structure below and set `args.datapath=YOUR_DATASET_DIR`:

    YOUR_DATASET_DIR/
    ├── VOC2012/
    │   ├── Annotations/
    │   ├── JPEGImages/
    │   ├── ...
    ├── COCO2014/
    │   ├── annotations/
    │   ├── train2014/
    │   ├── val2014/
    │   ├── ...
    ├── ...

We follow the dataset protocol of HSNet and PFENet.
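As a quick sanity check of the layout above, a minimal sketch like the following can verify that the expected sub-directories exist under your dataset root (`missing_dirs` is a hypothetical helper for illustration, not part of this repo):

```python
import os

# Expected top-level dataset folders, following the layout above
EXPECTED = {
    "VOC2012": ["Annotations", "JPEGImages"],
    "COCO2014": ["annotations", "train2014", "val2014"],
}

def missing_dirs(datapath):
    """Return the expected sub-directories that are missing under datapath."""
    missing = []
    for dataset, subdirs in EXPECTED.items():
        for sub in subdirs:
            path = os.path.join(datapath, dataset, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing
```

For example, `missing_dirs("YOUR_DATASET_DIR")` returns an empty list once both datasets are downloaded and extracted in place.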

:mag: Related repos

Our project refers to and heavily borrows some of the code from the following repos:

:bow: Acknowledgements

This work was supported by Samsung Advanced Institute of Technology (SAIT) and also by Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD (UD190031RD). We also thank Ahyun Seo and Deunsol Jung for their helpful discussion.