<div align="center"> <h2>What Makes Good Examples for Visual In-Context Learning?</h2> <div> <a href='https://zhangyuanhan-ai.github.io/' target='_blank'>Zhang Yuanhan</a>&emsp; <a href='https://kaiyangzhou.github.io/' target='_blank'>Zhou Kaiyang</a>&emsp; <a href='https://liuziwei7.github.io/' target='_blank'>Liu Ziwei</a> </div> <div> S-Lab, Nanyang Technological University </div> <img src="figures/motivation.png"> <h3>TL;DR</h3>

We study the effect of in-context examples in computer vision and propose a Prompt Retrieval framework that automatically selects in-context examples, consisting of an unsupervised method (UnsupPR) and a supervised method (SupPR).


<p align="center"> <a href="https://arxiv.org/abs/2301.13670" target='_blank'>[arXiv]</a> </p> </div>
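As a rough intuition for UnsupPR, the sketch below ranks candidate training images by cosine similarity to the query in a generic feature space and picks the nearest neighbor as the in-context example. It is a minimal illustration, not this repository's actual API: `retrieve_prompt` is a hypothetical helper, and random vectors stand in for real image features (e.g., from an off-the-shelf encoder).

```python
import numpy as np

def retrieve_prompt(query_feat: np.ndarray, candidate_feats: np.ndarray) -> int:
    """Return the index of the candidate whose feature is most similar
    to the query under cosine similarity (nearest-neighbor retrieval)."""
    q = query_feat / np.linalg.norm(query_feat)                # (D,)
    c = candidate_feats / np.linalg.norm(
        candidate_feats, axis=1, keepdims=True)                # (N, D)
    return int(np.argmax(c @ q))                               # best match

# Toy usage: random vectors stand in for image embeddings.
rng = np.random.default_rng(0)
query = rng.standard_normal(512)         # feature of the query image
pool = rng.standard_normal((100, 512))   # features of the training set
print(retrieve_prompt(query, pool))      # index of the selected example
```

In the paper, SupPR keeps this retrieval scheme but learns the feature space so that similarity better predicts downstream in-context performance.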

Updates

[01/2023] The arXiv paper has been released.

[01/2023] The code for foreground segmentation has been released.

Environment setup

conda create -n XXX python=3.8   # replace XXX with your environment name
conda activate XXX
pip install -r requirements.txt  # install the Python dependencies

Data preparation

Our data preparation pipeline is based on visual prompt. Please follow the dataset preparation steps for the PASCAL-5<sup>i</sup> dataset in that repository.

How to run

Click the UnsupPR/SupPR strategy below to see detailed instructions on how to run the code and reproduce the results.

Performance

Here, Random is the baseline method in visual prompt; SupPR and UnsupPR are short for supervised prompt retrieval and unsupervised prompt retrieval, respectively.

Figure: performance comparison of Random, UnsupPR, and SupPR.

Models

The SupPR models for each PASCAL-5<sup>i</sup> split are uploaded at this link.

Citation

If you use this code in your research, please cite this work.

@misc{zhang2023VisualPromptRetrieval,
      title={What Makes Good Examples for Visual In-Context Learning?},
      author={Yuanhan Zhang and Kaiyang Zhou and Ziwei Liu},
      year={2023},
      eprint={2301.13670},
      archivePrefix={arXiv},
}

Acknowledgments

Part of the code is borrowed from visual prompt, SupContrast, timm and mmcv.
