Adapting CLIP For Phrase Localization Without Further Training
Jiahao Li, Greg Shakhnarovich, Raymond A. Yeh
Toyota Technological Institute at Chicago (TTIC)
This repository contains the PyTorch implementation of Adapting CLIP For Phrase Localization Without Further Training. If you use this code in your experiments or find it helpful, please consider citing the following paper:
<pre>
@inproceedings{Li_ARXIV_2022,
  author  = {Jiahao Li and Greg Shakhnarovich and Raymond A. Yeh},
  title   = {Adapting CLIP For Phrase Localization Without Further Training},
  journal = {arXiv preprint arXiv:2204.03647},
  year    = {2022},
}
</pre>

Dependencies
Follow CLIP's installation procedure:

<pre>
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git
</pre>
Specifically, we are using commit 40f5484.
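As a quick sanity check, the snippet below (illustrative only, not part of this repository) verifies that CLIP is installed and that the ViT-L/14 backbone used by `eval.py` loads correctly:

```python
# Illustrative sanity check (not part of this repository): verify that CLIP
# is installed and that the ViT-L/14 backbone loads.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
print(clip.available_models())  # ViT-L/14 should be listed

model, preprocess = clip.load("ViT-L/14", device=device)
tokens = clip.tokenize(["a person riding a bike"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(tokens)
print(text_features.shape)  # e.g. torch.Size([1, 768])
```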
Data
All of the data should be put in a `data` directory in the root dir of the repo.

- Download the Flickr and VG images to `data/flickr` and `data/vg`, respectively.
- Download the Flickr30k Entities annotations using `cd data/flickr && git clone https://github.com/BryanPlummer/flickr30k_entities.git`.
- Download the ZSG annotations from this link to `data/ds_csv_ann`.
After setting up, the `data` dir should have the following structure:

<pre>
data
├── flickr
│   ├── flickr30k_entities
│   │   ├── Annotations
│   │   ├── Sentences
│   │   ├── test.txt
│   │   ├── train.txt
│   │   └── val.txt
│   └── flickr30k_images
├── vg
│   ├── VG_100K
│   └── VG_100K_2
└── ds_csv_ann
    ├── flickr30k
    ├── flickr30k_c0
    ├── flickr30k_c1
    └── vg_split
</pre>
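Optionally, a small script like the following (an illustrative sketch, not part of the repo) can confirm that the expected directories are in place before running evaluation:

```python
# Illustrative sketch (not part of the repo): confirm the expected layout
# under data/ before running evaluation.
from pathlib import Path

data_root = Path("data")
expected_dirs = [
    "flickr/flickr30k_entities/Annotations",
    "flickr/flickr30k_entities/Sentences",
    "flickr/flickr30k_images",
    "vg/VG_100K",
    "vg/VG_100K_2",
    "ds_csv_ann/flickr30k",
    "ds_csv_ann/flickr30k_c0",
    "ds_csv_ann/flickr30k_c1",
    "ds_csv_ann/vg_split",
]
for rel in expected_dirs:
    path = data_root / rel
    status = "ok" if path.is_dir() else "MISSING"
    print(f"[{status}] {path}")
```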
Usage
To run evaluation on the ZSG dataset as reported in the paper, please refer to the full list of arguments in `eval.py` to specify the dataset, architecture, etc.
For example, the following command runs the ViT-L/14 architecture on the first 500 examples of the Flickr S1 validation set with an IoU threshold of 0.5.

<pre>
python eval.py --model vit14 --dataset flickr_s1_val --iou_thr 0.5 --num_samples 500
</pre>
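To sweep over several IoU thresholds, a simple wrapper such as the one below can be used; it is a hypothetical helper (not provided by the repo) that only reuses the flags shown in the example command above:

```python
# Hypothetical wrapper (not provided by the repo): sweep eval.py over several
# IoU thresholds, reusing only the flags from the example command above.
import subprocess

for iou_thr in (0.3, 0.4, 0.5):
    cmd = [
        "python", "eval.py",
        "--model", "vit14",
        "--dataset", "flickr_s1_val",
        "--iou_thr", str(iou_thr),
        "--num_samples", "500",
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```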