V2M: Visual 2-Dimensional Mamba for Image Representation Learning

This repository is the official implementation of V2M: Visual 2-Dimensional Mamba for Image Representation Learning

Paper

V2M: Visual 2-Dimensional Mamba for Image Representation Learning

Chengkun Wang, Wenzhao Zheng, Yuanhui Huang, Jie Zhou, Jiwen Lu

Motivation of V2M

Figure: previous vision Mambas process image tokens with a 1D SSM, whereas we extend the SSM to a 2D form better suited to image representation learning by introducing a prior that strengthens the relevance of adjacent regions.
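For intuition, here is a minimal, illustrative sketch of a 2D state-space recurrence in which each patch's hidden state couples its top and left neighbors. This is not the authors' exact formulation (the paper's SSM is input-dependent and implemented efficiently), and all names below are hypothetical:

```python
import numpy as np

def ssm_2d(x, A1, A2, B, C):
    """Toy 2D SSM recurrence (Fornasini-Marchesini style).

    x: (H, W, D_in) grid of patch features.
    A1, A2: (N, N) state transitions for the top / left neighbors.
    B: (N, D_in) input matrix; C: (D_out, N) output matrix.
    """
    H, W, _ = x.shape
    N = A1.shape[0]
    h = np.zeros((H, W, N))
    y = np.zeros((H, W, C.shape[0]))
    for i in range(H):
        for j in range(W):
            top = h[i - 1, j] if i > 0 else np.zeros(N)
            left = h[i, j - 1] if j > 0 else np.zeros(N)
            # Each state mixes information from adjacent regions,
            # which is the prior V2M builds in.
            h[i, j] = A1 @ top + A2 @ left + B @ x[i, j]
            y[i, j] = C @ h[i, j]
    return y
```

The explicit loops are only for readability; a practical implementation would use a parallel scan, as in Mamba.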

Overall framework of V2M

Figure: the overall V2M framework.

Training Environment
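The original environment instructions are not reproduced here. Since V2M builds on Vision Mamba (Vim), a working setup presumably mirrors Vim's: a recent PyTorch build with CUDA, plus the mamba-ssm and causal-conv1d packages; consult the Vim repository for exact versions.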

Train Your V2M

bash v2m/scripts/tiny.sh

bash v2m/scripts/small.sh

The scripts above train V2M built on the Vim codebase. Applying V2M to other vision Mambas only requires porting the SSM computation to those frameworks, as sketched below.
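As a purely hypothetical illustration of that transfer (module and argument names below are made up, not this repo's API), the idea is to reshape the token sequence into its patch grid, apply the 2D SSM, and flatten back, leaving the rest of the network untouched:

```python
import torch.nn as nn

class V2MStyleBlock(nn.Module):
    """Hypothetical vision-Mamba-style block where the usual 1D
    selective scan is replaced by a 2D SSM over the patch grid."""

    def __init__(self, dim, ssm_2d):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ssm_2d = ssm_2d  # any module mapping (B, H, W, C) -> (B, H, W, C)

    def forward(self, x, grid_hw):
        # x: (B, L, C) token sequence; grid_hw: (H, W) with H * W == L.
        B, L, C = x.shape
        H, W = grid_hw
        h = self.norm(x).view(B, H, W, C)
        h = self.ssm_2d(h)          # 2D SSM in place of the 1D scan
        return x + h.view(B, L, C)  # residual connection
```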

Results

Figure: main results of V2M (see the paper for detailed numbers).

Acknowledgement

This project is based on Vision Mamba (code), Mamba (code), Causal-Conv1d (code), and DeiT (code). We thank the authors for their excellent work.

Citation

If you find this project helpful, please consider citing the following paper:

@article{wang2024V2M,
    title={V2M: Visual 2-Dimensional Mamba for Image Representation Learning},
    author={Chengkun Wang and Wenzhao Zheng and Yuanhui Huang and Jie Zhou and Jiwen Lu},
    journal={arXiv preprint arXiv:2410.10382},
    year={2024}
}