
Visual Prompt Tuning

https://arxiv.org/abs/2203.12119


This repository contains the official PyTorch implementation of Visual Prompt Tuning (ECCV 2022).

(Teaser figure: overview of Visual Prompt Tuning.)
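To give a sense of the method the teaser illustrates: VPT prepends a small set of learnable prompt tokens to the input token sequence of a frozen, pre-trained ViT and trains only the prompts and the classification head. The sketch below is a minimal conceptual illustration of the shallow variant only; the class name `PromptedViT`, the stand-in encoder, and all sizes are hypothetical and do not reflect this repo's actual module layout.

```python
# Minimal conceptual sketch of VPT-shallow (illustrative only; the class name,
# the stand-in encoder, and all sizes are hypothetical, not this repo's code).
import torch
import torch.nn as nn


class PromptedViT(nn.Module):
    def __init__(self, encoder, embed_dim=768, num_prompts=50, num_classes=100):
        super().__init__()
        self.encoder = encoder                 # pre-trained transformer blocks
        for p in self.encoder.parameters():    # freeze the entire backbone
            p.requires_grad = False
        # learnable prompt tokens, inserted into the input token sequence
        self.prompts = nn.Parameter(torch.empty(1, num_prompts, embed_dim))
        nn.init.uniform_(self.prompts, -0.5, 0.5)
        self.head = nn.Linear(embed_dim, num_classes)  # only prompts + head train

    def forward(self, tokens):
        # tokens: (B, 1 + N, D) = [CLS] token followed by frozen patch embeddings
        B = tokens.shape[0]
        prompts = self.prompts.expand(B, -1, -1)
        # insert the prompts between the [CLS] token and the patch tokens
        x = torch.cat([tokens[:, :1], prompts, tokens[:, 1:]], dim=1)
        x = self.encoder(x)                    # frozen transformer layers
        return self.head(x[:, 0])              # classify from the [CLS] token


# Toy usage with a stand-in encoder (a real setup would load a pre-trained ViT).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2,
)
model = PromptedViT(encoder)
logits = model(torch.randn(4, 1 + 196, 768))   # -> (4, 100)
```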

Environment settings

See env_setup.sh

Structure of this repo (key files are marked with 👉):

Experiments

Key configs:

Dataset preparation:

See Table 8 in the Appendix for dataset details.

Pre-trained model preparation

Download the pre-trained Transformer-based backbones and place them under MODEL.MODEL_ROOT (ConvNeXt-Base and ResNet-50 will be downloaded automatically via the links in the code). Note that you also need to rename the downloaded ViT-B/16 checkpoint from ViT-B_16.npz to imagenet21k_ViT-B_16.npz.

See Table 9 in the Appendix for more details about pre-trained backbones.

| Pre-trained Backbone | Pre-trained Objective | Link | md5sum |
| --- | --- | --- | --- |
| ViT-B/16 | Supervised | [link](https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz) | `d9715d` |
| ViT-B/16 | MoCo v3 | [link](https://dl.fbaipublicfiles.com/moco-v3/vit-b-300ep/linear-vit-b-300ep.pth.tar) | `8f39ce` |
| ViT-B/16 | MAE | [link](https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth) | `8cad7c` |
| Swin-B | Supervised | [link](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22k.pth) | `bf9cc1` |
| ConvNeXt-Base | Supervised | [link](https://dl.fbaipublicfiles.com/convnext/convnext_base_22k_224.pth) | - |
| ResNet-50 | Supervised | [link](https://pytorch.org/vision/stable/models.html) | - |
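If you want to sanity-check a download against the md5 prefixes above and perform the required rename, a minimal sketch is below. It is not part of the repo, and the `./pretrained` path is a placeholder for whatever you set as MODEL.MODEL_ROOT.

```python
# Hypothetical helper (not part of this repo): check a downloaded checkpoint
# against the md5 prefix listed above, then rename the supervised ViT-B/16 file.
import hashlib
from pathlib import Path

MODEL_ROOT = Path("./pretrained")   # placeholder for your MODEL.MODEL_ROOT

def md5_prefix(path: Path, length: int = 6) -> str:
    """Return the first `length` hex characters of the file's md5 digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()[:length]

ckpt = MODEL_ROOT / "ViT-B_16.npz"
assert md5_prefix(ckpt) == "d9715d", "checksum mismatch -- re-download the checkpoint"
# the code expects the supervised ViT-B/16 checkpoint under this exact name:
ckpt.rename(MODEL_ROOT / "imagenet21k_ViT-B_16.npz")
```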

Examples for training and aggregating results

See demo.ipynb for how to use this repo.

Hyperparameters for experiments in the paper

The hyperparameter values used in Tables 1-2, Tables 4-5, and Figs. 3-4 (prompt length for VPT / reduction rate for adapters, base learning rate, and weight decay) can be found here: Dropbox / Google Drive.
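For illustration only, a sweep over these three hyperparameters could be wired up as below; the value lists and the `run_experiment` stub are placeholders, not the settings released in the files above.

```python
# Purely illustrative grid over the three hyperparameters named above; the value
# lists and run_experiment() are placeholders, not the released settings.
from itertools import product

prompt_lengths = [5, 10, 50]        # placeholder prompt lengths for VPT
base_lrs = [0.1, 0.5, 1.0]          # placeholder base learning rates
weight_decays = [1e-4, 1e-2]        # placeholder weight decay values

def run_experiment(prompt_len: int, lr: float, wd: float) -> None:
    """Stub for a single training run (e.g. a call into this repo's training code)."""
    print(f"prompt_len={prompt_len:>3}  base_lr={lr:<4}  weight_decay={wd}")

for p, lr, wd in product(prompt_lengths, base_lrs, weight_decays):
    run_experiment(p, lr, wd)
```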

Citation

If you find our work helpful in your research, please cite it as:

@inproceedings{jia2022vpt,
  title={Visual Prompt Tuning},
  author={Jia, Menglin and Tang, Luming and Chen, Bor-Chun and Cardie, Claire and Belongie, Serge and Hariharan, Bharath and Lim, Ser-Nam},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2022}
}

License

The majority of VPT is licensed under the CC-BY-NC 4.0 license (see LICENSE for details). Portions of the project are available under separate license terms: google-research/task_adaptation and huggingface/transformers are licensed under the Apache 2.0 license; Swin-Transformer, ConvNeXt, and ViT-pytorch are licensed under the MIT license; and MoCo-v3 and MAE are licensed under the Attribution-NonCommercial 4.0 International license.