<div align="center"> <h1> SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference </h1> <h5 align="center">Yuan Zhang<sup>1,3* </sup>, Chun-Kai Fan<sup>1*</sup>, Junpeng Ma<sup>2*</sup>, Wenzhao Zheng<sup>3✉️</sup>, Tao Huang<sup>4</sup>, Kuan Cheng<sup>1</sup>,
Denis Gudovskiy<sup>5</sup>, Tomoyuki Okuno<sup>5</sup>, Yohei Nakata<sup>5</sup>, Kurt Keutzer<sup>3</sup>, Shanghang Zhang<sup>1✉️</sup>
<sup>1</sup>School of Computer Science, Peking University, <sup>2</sup>Fudan University,
<sup>3</sup>UC Berkeley, <sup>4</sup>The University of Sydney, <sup>5</sup>Panasonic Holdings Corporation
</h5> </div>

## 📜 News
🔥 [2024/10/15] We released SparseVLM and its Project Page! The code is now open-source!
<p align='center'> <img src='./assests/archi.png' alt='mask' width='700px'> </p>
## 👀 Overview
In vision-language models (VLMs), visual tokens usually account for a significant share of the computational overhead, despite carrying lower information density than text tokens. To address this, existing methods extract more compact image representations by modifying the image encoder or projector. While some recent works further sparsify visual tokens during decoding, they still ignore the guidance from the language tokens, which contradicts the multimodal paradigm. We argue that visual tokens should be sparsified adaptively according to the question prompt, since the model may focus on different parts of the image (e.g., foreground or background) for different questions, as shown in the figure below. Unlike previous text-agnostic visual sparsification methods (c), e.g., the recent FastV, our SparseVLM (b) is guided by the question prompt to select relevant visual patches.
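To make the text-guided selection concrete, below is a minimal, self-contained sketch of the general idea: visual tokens are scored by the attention they receive from the prompt tokens, and only a fixed budget of the highest-scoring tokens is kept. This is only an illustration, not the actual SparseVLM algorithm; the function name, tensor shapes, and the 192-token budget are assumptions made for the example.

```python
# Illustrative sketch of text-guided visual token pruning (NOT the exact
# SparseVLM procedure). Shapes and the keep budget are assumptions.
import torch


def prune_visual_tokens(visual_tokens, text_tokens, num_keep=192):
    """visual_tokens: (B, Nv, D), text_tokens: (B, Nt, D) -> (B, num_keep, D)."""
    d = visual_tokens.size(-1)
    # Cross-attention weights from text queries to visual keys: (B, Nt, Nv)
    attn = torch.softmax(
        text_tokens @ visual_tokens.transpose(1, 2) / d ** 0.5, dim=-1
    )
    # Relevance of each visual token = mean attention it receives from the prompt
    relevance = attn.mean(dim=1)                       # (B, Nv)
    keep = relevance.topk(num_keep, dim=1).indices     # (B, num_keep)
    keep = keep.sort(dim=1).values                     # preserve spatial order
    return torch.gather(
        visual_tokens, 1, keep.unsqueeze(-1).expand(-1, -1, d)
    )


# Toy example: 576 patch tokens pruned down to a 192-token budget
visual = torch.randn(1, 576, 1024)
prompt = torch.randn(1, 32, 1024)
print(prune_visual_tokens(visual, prompt).shape)  # torch.Size([1, 192, 1024])
```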
<div align=center> <img width="600" alt="image" src="./assests/moti.png"> </div>

## 👨💻 Preparation
- Clone this repository and navigate to the SparseVLMs folder

```bash
git clone https://github.com/Gumpest/SparseVLMs.git
cd SparseVLMs
```
- Install the necessary packages (a quick import check is sketched after this list)

```bash
conda create -n SparseVLMs python=3.10 -y
conda activate SparseVLMs
pip install -e .
```
- Download the multimodal benchmarks

  Please follow the detailed instructions in LLaVA-Evaluation.
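Assuming the editable install exposes the same `llava` package as the upstream LLaVA codebase this repository builds on (an assumption, not something stated above), a quick sanity check of the environment might look like:

```python
# Hypothetical post-install sanity check; the package name `llava` is assumed
# from the upstream LLaVA codebase and may differ in this repository.
import llava

print("Environment OK, llava imported from:", llava.__file__)
```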
## 🎯 Usage
In each evaluation script, the `--sparse` flag indicates whether to apply sparsification, while `--scale` and `--bias` control the degree of token sparsity.
- Example for evaluating MME results (192 tokens, scale = 13.5, bias = 0.0):

```bash
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/mme.sh
```

- Example for evaluating POPE results (128 tokens, scale = 9, bias = 6):

```bash
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/pope.sh
```

- Example for evaluating TextVQA results (64 tokens, scale = 0.8, bias = 0.0):

```bash
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
```
## License
This project is released under the Apache 2.0 license.
## Citation
If you use SparseVLM in your research, please cite our work using the following BibTeX entry:
```bibtex
@article{zhang2024sparsevlm,
  title={SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference},
  author={Zhang, Yuan and Fan, Chun-Kai and Ma, Junpeng and Zheng, Wenzhao and Huang, Tao and Cheng, Kuan and Gudovskiy, Denis and Okuno, Tomoyuki and Nakata, Yohei and Keutzer, Kurt and others},
  journal={arXiv preprint arXiv:2410.04417},
  year={2024}
}
```
## Acknowledgment
We extend our gratitude to the open-source efforts of TCFormer, LLaVA, MiniGemini and VideoLLaVA.