[EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner

RWKV-CLIP: A Robust Vision-Language Representation Learner <br> <a href="https://github.com/GaryGuTC">Tiancheng Gu</a>, <a href="https://kaicheng-yang0828.github.io">Kaicheng Yang</a>, <a href="https://github.com/anxiangsir">Xiang An</a>, Ziyong Feng, <a href="https://scholar.google.com/citations?user=JZzb8XUAAAAJ&hl=zh-CN">Dongnan Liu</a>, <a href="https://weidong-tom-cai.github.io/">Weidong Cai</a>, <a href="https://jiankangdeng.github.io">Jiankang Deng</a>

📣 News

💡 Highlights

We introduce a diverse description generation framework that leverages Large Language Models (LLMs) to synthesize and refine content from web-based texts, synthetic captions, and detection tags. Benefiting from the detection tags, more semantic information from the images can be introduced, which in turn further constrains the LLMs and mitigates hallucinations.

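In code, this step amounts to prompting an LLM with all three text sources and asking it to merge them into one grounded caption. The snippet below is only an illustrative sketch of that idea; the prompt wording, the generate_description helper, and the llm callable are assumptions, not the exact instruction template used in the paper.

# Illustrative sketch of diverse description generation.
# The prompt wording and the llm() callable are assumptions; the released
# code defines the actual instruction template and LLM backend.

def build_prompt(web_text: str, synthetic_caption: str, detection_tags: list) -> str:
    """Merge the three text sources into a single instruction for the LLM."""
    return (
        "Combine and rewrite the following information into one accurate, "
        "fluent image description. Keep only content supported by the inputs.\n"
        f"Web text: {web_text}\n"
        f"Synthetic caption: {synthetic_caption}\n"
        f"Detection tags: {', '.join(detection_tags)}\n"
        "Description:"
    )

def generate_description(llm, web_text, synthetic_caption, detection_tags):
    # llm is any text-generation callable (e.g. a model served via vLLM);
    # swap in your own inference API here.
    return llm(build_prompt(web_text, synthetic_caption, detection_tags))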

We propose RWKV-CLIP, the first RWKV-driven vision-language representation learning model that combines the effective parallel training of transformers with the efficient inference of RNNs.

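For intuition on the RNN-style inference, the core of RWKV time mixing can be written as a per-token recurrence over a constant-size state, so generation cost does not grow with sequence length. The snippet below is a simplified, generic sketch of that recurrence; it omits the numerical stabilization and channel-mixing blocks of the full model and is not the exact formulation used in this repository.

import torch

def wkv_step(k_t, v_t, state, w, u):
    """One recurrent step of simplified RWKV time mixing (per channel).

    state = (a, b): running weighted sums of values and of weights;
    w > 0 is the per-channel decay and u is the bonus for the current token.
    Real implementations track a running maximum for numerical stability.
    """
    a, b = state
    out = (a + torch.exp(u + k_t) * v_t) / (b + torch.exp(u + k_t))
    a = torch.exp(-w) * a + torch.exp(k_t) * v_t
    b = torch.exp(-w) * b + torch.exp(k_t)
    return out, (a, b)

# Token-by-token inference only carries the (a, b) state forward,
# while training can evaluate the same operator in parallel over the sequence.
dim = 4
state = (torch.zeros(dim), torch.zeros(dim))
w, u = torch.full((dim,), 0.5), torch.zeros(dim)
for k_t, v_t in zip(torch.randn(8, dim), torch.randn(8, dim)):
    y, state = wkv_step(k_t, v_t, state, w, u)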

🎨 In-Progress

Environment installation

conda create -n rwkv_clip python=3.10 -y
conda activate rwkv_clip

pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118
pip install -U openmim
mim install mmcv-full==1.7.2
pip install -r requirements.txt

Usage

git clone https://github.com/deepglint/RWKV-CLIP.git
cd RWKV-CLIP
import os
import clip
import json
import torch
import warnings
from PIL import Image
from torch.nn import functional as F
from open_clip.transform import image_transform
from model_config.utils_notebook import load_model_configs
warnings.filterwarnings('ignore')
args = load_model_configs('model_config/RWKV_CLIP_B32.json') # model_config/RWKV_CLIP_B16.json
from model.utils import create_RWKV_Model

transform = image_transform(args.input_size, False)
device = "cuda" if torch.cuda.is_available() else "cpu"

# Preprocess the input image and tokenize the candidate text prompts
image = transform(Image.open("figure/Diverse_description_generation_00.png")).unsqueeze(0).to(device) 
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

# Load model
RWKV_CLIP_model = create_RWKV_Model(args, model_weight_path = "Model_pretrained_weight.pt").to(device)
RWKV_CLIP_model.eval()

# Calculate score
with torch.no_grad():
    image_features, text_features, logit_scale = RWKV_CLIP_model(image, text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs: ", text_probs.tolist()) # Label probs: [[1., 0., 0.]]

Instruction Dataset

Download YFCC15M

Generate rec files
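
If you need to pack the image-text pairs yourself, the sketch below shows one way to write MXNet-style .rec/.idx files; the file names, the pairs.json layout, and the assumption that the loader expects this exact record format are all illustrative, so prefer the conversion scripts shipped with the repository.

import json
from mxnet import recordio

# Illustrative sketch: pack (image, caption) pairs into train.rec / train.idx.
# The record layout expected by the training code may differ from this.
writer = recordio.MXIndexedRecordIO('train.idx', 'train.rec', 'w')
with open('pairs.json') as f:  # hypothetical [{"image": path, "caption": text}, ...]
    pairs = json.load(f)

for i, item in enumerate(pairs):
    with open(item['image'], 'rb') as img_f:
        img_bytes = img_f.read()
    header = recordio.IRHeader(flag=0, label=0, id=i, id2=0)
    # Store the caption next to the raw image bytes inside one record.
    payload = json.dumps({'caption': item['caption']}).encode() + b'\x00' + img_bytes
    writer.write_idx(i, recordio.pack(header, payload))
writer.close()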

Pretrained Model Weights

| Model | Dataset | Download |
|---|---|---|
| RWKV-CLIP-B/32 | YFCC15M | 🤗 ckpt \| cfg |
| RWKV-CLIP-B/32 | LAION10M | 🤗 ckpt \| cfg |
| RWKV-CLIP-B/16 | LAION10M | 🤗 ckpt \| cfg |
| RWKV-CLIP-B/32 | LAION30M | 🤗 ckpt \| cfg |

Training

bash shell/train_RWKV_CLIP_B32_YFCC15M.sh

Evaluation

Evaluate zero-shot cross-modal retrieval

bash shell/test_zero_shot_retrieval.sh
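
Under the hood, zero-shot retrieval scores every image against every caption with the same normalized dot product used in the usage example above and reports Recall@K in both directions. The helper below is a minimal sketch of that metric for precomputed, L2-normalized feature matrices; the evaluation script already computes this for you.

import torch

def recall_at_k(image_feats, text_feats, k=1):
    """Image-to-text Recall@K for aligned pairs (row i of each matrix matches).

    Both inputs are assumed L2-normalized with shape (N, D).
    """
    sim = image_feats @ text_feats.T                 # (N, N) cosine similarities
    topk = sim.topk(k, dim=1).indices                # best-k captions per image
    targets = torch.arange(sim.size(0)).unsqueeze(1)
    return (topk == targets).any(dim=1).float().mean().item()

# Text-to-image retrieval is the same computation with the arguments swapped:
# recall_at_k(text_feats, image_feats, k=5)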

Evaluate zero-shot classification

bash shell/test_zero_shot_classificaiton.sh
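
Zero-shot classification follows the usual CLIP recipe: embed each class name through one or more prompt templates, average and re-normalize the resulting text features into a per-class weight, and pick the class most similar to the image feature. The sketch below illustrates that recipe; the TEMPLATES list and the encode_text helper are placeholders for the dataset-specific prompts and the model's text encoding path.

import torch

# Illustrative prompt set; real evaluations use dataset-specific template lists.
TEMPLATES = ["a photo of a {}.", "a close-up photo of a {}."]

@torch.no_grad()
def build_class_weights(encode_text, classnames):
    """encode_text(list_of_prompts) -> (T, D) features; a hypothetical helper
    standing in for however the model embeds tokenized text."""
    weights = []
    for name in classnames:
        feats = encode_text([t.format(name) for t in TEMPLATES])
        feats = feats / feats.norm(dim=-1, keepdim=True)
        mean = feats.mean(dim=0)
        weights.append(mean / mean.norm())
    return torch.stack(weights)  # (num_classes, D)

@torch.no_grad()
def zero_shot_predict(image_features, class_weights):
    """image_features: (N, D) L2-normalized; returns predicted class indices."""
    return (image_features @ class_weights.T).argmax(dim=-1)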

Results

Acknowledgements

This project builds on RWKV, VisionRWKV, RAM++, LLaMA-Factory, vllm, OFA, and open_clip. Thanks for their excellent work.

License

This project is released under the MIT license. Please see the LICENSE file for more information.

Dataset Contributors

This project would not have been possible without the invaluable contributions of the following individuals, who have been instrumental in data scraping and collection:
Thank you to all the contributors for their hard work and dedication!

| Contributor | Email |
|---|---|
| Bin Qin | skyqin@gmail.com |
| Lan Wu | bah-wl@hotmail.com |
| Haiqiang Jiang | haiqiangjiang@deepglint.com |
| Yuling Wu | yulingwu@deepglint.com |

📖 Citation

If you find this repository useful, please use the following BibTeX entry for citation.

@misc{gu2024rwkvclip,
      title={RWKV-CLIP: A Robust Vision-Language Representation Learner}, 
      author={Tiancheng Gu and Kaicheng Yang and Xiang An and Ziyong Feng and Dongnan Liu and Weidong Cai and Jiankang Deng},
      year={2024},
      eprint={2406.06973},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

🌟Star History
