# PTP

<!-- [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/position-guided-text-prompt-for-vision/zero-shot-cross-modal-retrieval-on-coco-2014)]( https://paperswithcode.com/sota/zero-shot-cross-modal-retrieval-on-coco-2014?metric=Text-to-image%20R%401) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/image-classification-on-imagenet)]( https://paperswithcode.com/sota/zero-shot-cross-modal-retrieval-on-coco-2014?metric=Text-to-image%20R%401) -->


This repository includes implementations of the following method:

## Introduction

The goal of Position-guided Text Prompt (PTP) is to bring position information into conventional Vision-Language Pre-training (VLP) models, as current mainstream end-to-end VLP models ignore these important cues.

<p align="center"> <img src="imgs/motivation.jpg" width = "500" /> </p>

We observe that position information is missing in a well-trained ViLT model.

<!-- ![motivation](imgs/main.jpg) --> <p align="center"> <img src="imgs/main.jpg" /> </p>

Our method provides a good alternative to existing object-feature-based methods (BUTD and its follow-up works).

Some examples of PTP prompts are shown below:

<p align="center"> <img src="imgs/block_mask.png" /> </p>

## Updates

## Installation

Please find installation instructions for PyTorch in INSTALL.md.

## Dataset Preparation

You may follow the instructions in DATASET.md to prepare the datasets. Since dataset preparation is very time-consuming, we provide detailed guidance and also release our generated corpus.

## Pretrained & Fine-tuned Models

### 1. Pre-trained Model

| Method | Vision Encoder | #Images | Dataset | Pretrained Weights | Training Logs |
| :--- | :--- | :--- | :--- | :--- | :--- |
| PTP-BLIP | ViT-B(DeiT) | 4M | CC3M+COCO+VG+SBU | link | link |

### 2. Zero-shot & Fine-tuned Downstream Models

#### 2.1 Captioning

| Method | B@4 | CIDEr | Config |
| :--- | :--- | :--- | :--- |
| PTP-BLIP | 40.1 | 135.0 | configs/caption_coco.yaml |

#### 2.2 Zero-shot Retrieval

<!-- ##### 2.2.1 COCO | Task | I2T@1 | T2I@1 | Model Weight | Training Logs | Config | | :--- | :--- | :--- | :--- | :--- | :---: | | Zero-shot Retrieval(COCO)| 72.3 | 49.5 | [link](https://huggingface.co/sail/PTP/blob/main/zero_shot_coco_checkpoint_4m.pth) | [link](https://huggingface.co/sail/PTP/blob/main/4M_ptp_coco_zero_shot.txt) | configs/retrieval_coco.yaml | -->
##### 2.2.2 Flickr30K

| Method | I2T@1 | T2I@1 | Model Weight | Training Logs | Config |
| :--- | :--- | :--- | :--- | :--- | :--- |
| PTP-BLIP | 86.4 | 67.0 | link | link | configs/retrieval_flickr.yaml |

#### 2.3 Retrieval (Fine-tune)

Tip: Please use as large a batch size as possible; we experimentally find that a larger batch size leads to better results on this task. Due to memory limitations, we use batch size 24 rather than the 28 used in the original implementation.

##### 2.3.1 COCO

| Method | I2T@1 | T2I@1 | Config |
| :--- | :--- | :--- | :--- |
| PTP-BLIP | 77.6 | 59.4 | configs/retrieval_coco.yaml |

##### 2.3.2 Flickr30K

| Method | I2T@1 | T2I@1 | Model Weight | Training Logs | Config |
| :--- | :--- | :--- | :--- | :--- | :--- |
| PTP-BLIP | 96.1 | 84.2 | link | link | configs/retrieval_flickr.yaml |

#### 2.4 VQA V2

| Method | Test-dev | Test-std | Model Weight | Training Logs | Config |
| :--- | :--- | :--- | :--- | :--- | :--- |
| PTP-BLIP | 76.02 | 76.18 | link | link | configs/vqa.yaml |

#### 2.5 NLVR

| Method | Dev | Test-P | Model Weight | Training Logs | Config |
| :--- | :--- | :--- | :--- | :--- | :--- |
| PTP-BLIP | 80.45 | 80.70 | link | link | configs/nlvr.yaml |

## Quick Start

Follow the example in GETTING_STARTED.md to start experimenting with VLP models using PTP.

## Transfer To Other Architectures

PTP transfers easily to other architectures without much effort. Specifically, modify your base code with the following two steps:

Then train the model with its original objectives.
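The caption-augmentation step above could be sketched as follows. This is a minimal, hedged illustration, not code from this repository: the helper name, the `(name, center_x, center_y)` object format, and the 3x3 grid are all assumptions, and the object locations would come from whatever detector or annotation source you already use:

```python
# Hedged sketch of the transfer recipe (names and formats assumed):
#   1) append a position-guided prompt to each training caption, then
#   2) train with the architecture's original objectives, unchanged.
import random

def augment_caption(caption, objects, image_w, image_h, n=3):
    """Append one 'The block [P] has a [O]' prompt for a randomly chosen object.

    objects: list of (name, center_x, center_y) tuples in pixel coordinates.
    """
    if not objects:
        return caption
    name, cx, cy = random.choice(objects)
    col = min(int(cx / image_w * n), n - 1)
    row = min(int(cy / image_h * n), n - 1)
    return f"{caption} The block {row * n + col} has a {name}."

sample = augment_caption(
    "A dog runs on the grass.",
    [("dog", 500, 300)],  # object center from any detector or annotation
    image_w=640, image_h=480,
)
print(sample)  # prints: A dog runs on the grass. The block 5 has a dog.
```

Because the change is confined to the text side of the data pipeline, the model architecture and loss functions stay untouched, which is what makes the transfer cheap.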

## Acknowledgement

This work is mainly based on BLIP and ViLT; thanks to the authors for these strong baselines. We also refer to OSCAR for the ablation study and dataset preparation.

## License

PTP is released under the Apache 2.0 license.

## Contact

Email: awinyimgprocess at gmail dot com

If you have any questions, please email me or open a new issue.

## Citation

If you find our work helpful, please cite it with the following BibTeX entry.

```bibtex
@article{wang2022ptp,
  title={Position-guided Text Prompt for Vision Language Pre-training},
  author={Wang, Alex Jinpeng and Zhou, Pan and Shou, Mike Zheng and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2212.09737},
  year={2022}
}
```