
<div align="center"> <h1>VMamba </h1> <h3>VMamba: Visual State Space Model</h3>

Yue Liu<sup>1</sup>, Yunjie Tian<sup>1</sup>, Yuzhong Zhao<sup>1</sup>, Hongtian Yu<sup>1</sup>, Lingxi Xie<sup>2</sup>, Yaowei Wang<sup>3</sup>, Qixiang Ye<sup>1</sup>, Yunfan Liu<sup>1</sup>

<sup>1</sup> University of Chinese Academy of Sciences, <sup>2</sup> HUAWEI Inc., <sup>3</sup> PengCheng Lab.

Paper: [arXiv 2401.10166](https://arxiv.org/abs/2401.10166)

</div>

:white_check_mark: Updates

For details, see detailed_updates.md.

Abstract

Designing computationally efficient network architectures persists as an ongoing necessity in computer vision. In this paper, we transplant Mamba, a state-space language model, into VMamba, a vision backbone that works in linear time complexity. At the core of VMamba lies a stack of Visual State-Space (VSS) blocks with the 2D Selective Scan (SS2D) module. By traversing along four scanning routes, SS2D helps bridge the gap between the ordered nature of 1D selective scan and the non-sequential structure of 2D vision data, which facilitates the gathering of contextual information from various sources and perspectives. Based on the VSS blocks, we develop a family of VMamba architectures and accelerate them through a succession of architectural and implementation enhancements. Extensive experiments showcase VMamba’s promising performance across diverse visual perception tasks, highlighting its advantages in input scaling efficiency compared to existing benchmark models.
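
To make the four scanning routes concrete, here is a minimal, illustrative sketch of the cross-scan step in PyTorch. This is a plain-tensor rendition for clarity, not the repository's optimized SS2D/CUDA implementation, and the function name `cross_scan` is ours.

```python
import torch

def cross_scan(x: torch.Tensor) -> torch.Tensor:
    """Unfold a 2D feature map (B, C, H, W) into four 1D sequences (B, 4, C, H*W):
    row-major, column-major, and the reverse of each route."""
    B, C, H, W = x.shape
    row_major = x.flatten(2)                             # scan rows, left-to-right
    col_major = x.transpose(2, 3).flatten(2)             # scan columns, top-to-bottom
    routes = torch.stack([row_major, col_major], dim=1)  # (B, 2, C, H*W)
    return torch.cat([routes, routes.flip(-1)], dim=1)   # add reversed routes -> (B, 4, C, H*W)

# Each route is processed by a selective scan (S6), and the four outputs are
# merged back onto the 2D grid by the inverse operation ("cross-merge").
seqs = cross_scan(torch.randn(2, 96, 14, 14))
print(seqs.shape)  # torch.Size([2, 4, 96, 196])
```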

Overview

<p align="center"> <img src="assets/architecture.png" alt="architecture" width="80%"> </p> <p align="center"> <img src="assets/ss2d.png" alt="arch" width="80%"> </p> <p align="center"> <img src="assets/erf.png" alt="erf" width="80%"> </p> <p align="center"> <img src="assets/attn.png" alt="attn" width="80%"> </p> <p align="center"> <img src="assets/activation_map.png" alt="activation" width="80%"> </p>

Main Results


:book: For details, see performance.md.

Classification on ImageNet-1K

| name | pretrain | resolution | acc@1 | #params | FLOPs | TP. | Train TP. | configs/logs/ckpts |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | ImageNet-1K | 224x224 | 81.2 | 28M | 4.5G | 1244 | 987 | -- |
| Swin-S | ImageNet-1K | 224x224 | 83.2 | 50M | 8.7G | 718 | 642 | -- |
| Swin-B | ImageNet-1K | 224x224 | 83.5 | 88M | 15.4G | 458 | 496 | -- |
| VMamba-S[s2l15] | ImageNet-1K | 224x224 | 83.6 | 50M | 8.7G | 877 | 314 | config/log/ckpt |
| VMamba-B[s2l15] | ImageNet-1K | 224x224 | 83.9 | 89M | 15.4G | 646 | 247 | config/log/ckpt |
| VMamba-T[s1l8] | ImageNet-1K | 224x224 | 82.6 | 30M | 4.9G | 1686 | 571 | config/log/ckpt |

Object Detection on COCO

| Backbone | #params | FLOPs | Detector | bbox AP | bbox AP50 | bbox AP75 | segm AP | segm AP50 | segm AP75 | configs/logs/ckpts |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | 48M | 267G | MaskRCNN@1x | 42.7 | 65.2 | 46.8 | 39.3 | 62.2 | 42.2 | -- |
| Swin-S | 69M | 354G | MaskRCNN@1x | 44.8 | 66.6 | 48.9 | 40.9 | 63.4 | 44.2 | -- |
| Swin-B | 107M | 496G | MaskRCNN@1x | 46.9 | -- | -- | 42.3 | -- | -- | -- |
| VMamba-S[s2l15] | 70M | 384G | MaskRCNN@1x | 48.7 | 70.0 | 53.4 | 43.7 | 67.3 | 47.0 | config/log/ckpt |
| VMamba-B[s2l15] | 108M | 485G | MaskRCNN@1x | 49.2 | 71.4 | 54.0 | 44.1 | 68.3 | 47.7 | config/log/ckpt |
| VMamba-B[s2l15] | 108M | 485G | MaskRCNN@1x[bs8] | 49.2 | 70.9 | 53.9 | 43.9 | 67.7 | 47.6 | config/log/ckpt |
| VMamba-T[s1l8] | 50M | 271G | MaskRCNN@1x | 47.3 | 69.3 | 52.0 | 42.7 | 66.4 | 45.9 | config/log/ckpt |
| Swin-T | 48M | 267G | MaskRCNN@3x | 46.0 | 68.1 | 50.3 | 41.6 | 65.1 | 44.9 | -- |
| Swin-S | 69M | 354G | MaskRCNN@3x | 48.2 | 69.8 | 52.8 | 43.2 | 67.0 | 46.1 | -- |
| VMamba-S[s2l15] | 70M | 384G | MaskRCNN@3x | 49.9 | 70.9 | 54.7 | 44.20 | 68.2 | 47.7 | config/log/ckpt |
| VMamba-T[s1l8] | 50M | 271G | MaskRCNN@3x | 48.8 | 70.4 | 53.50 | 43.7 | 67.4 | 47.0 | config/log/ckpt |

Semantic Segmentation on ADE20K

| Backbone | Input | #params | FLOPs | Segmentor | mIoU(SS) | mIoU(MS) | configs/logs/logs(ms)/ckpts |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | 512x512 | 60M | 945G | UperNet@160k | 44.4 | 45.8 | -- |
| Swin-S | 512x512 | 81M | 1039G | UperNet@160k | 47.6 | 49.5 | -- |
| Swin-B | 512x512 | 121M | 1188G | UperNet@160k | 48.1 | 49.7 | -- |
| VMamba-S[s2l15] | 512x512 | 82M | 1028G | UperNet@160k | 50.6 | 51.2 | config/log/log(ms)/ckpt |
| VMamba-B[s2l15] | 512x512 | 122M | 1170G | UperNet@160k | 51.0 | 51.6 | config/log/log(ms)/ckpt |
| VMamba-T[s1l8] | 512x512 | 62M | 949G | UperNet@160k | 47.9 | 48.8 | config/log/log(ms)/ckpt |

Getting Started

Installation

Step 1: Clone the VMamba repository:

To get started, first clone the VMamba repository and navigate to the project directory:

```bash
git clone https://github.com/MzeroMiko/VMamba.git
cd VMamba
```

Step 2: Environment Setup:

We recommend setting up a conda environment and installing dependencies via pip. Use the following commands to set up your environment. We also recommend pytorch>=2.0 and cuda>=11.8, although lower versions of PyTorch and CUDA are supported as well.

Create and activate a new conda environment

```bash
conda create -n vmamba
conda activate vmamba
```

Install Dependencies

```bash
pip install -r requirements.txt
cd kernels/selective_scan && pip install .
```
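
As a quick, optional sanity check that your environment matches the recommendation above, a short snippet like the following (generic, not part of the repository) prints the PyTorch and CUDA versions in use:

```python
import torch

# Recommended: PyTorch >= 2.0 built against CUDA >= 11.8 (lower versions also work).
print(f"PyTorch        : {torch.__version__}")
print(f"CUDA (build)   : {torch.version.cuda}")
print(f"CUDA available : {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU            : {torch.cuda.get_device_name(0)}")
```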

Check Selective Scan (optional)

Dependencies for Detection and Segmentation (optional)

```bash
pip install mmengine==0.10.1 mmcv==2.1.0 opencv-python-headless ftfy regex
pip install mmdet==3.3.0 mmsegmentation==1.2.2 mmpretrain==1.2.0
```
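
After installing these, an optional import check (a generic snippet, not part of the repository) confirms that the OpenMMLab packages resolved to the expected versions:

```python
# Print the versions of the installed OpenMMLab packages.
import mmengine, mmcv, mmdet, mmseg, mmpretrain

for module in (mmengine, mmcv, mmdet, mmseg, mmpretrain):
    print(f"{module.__name__:12s} {module.__version__}")
```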

Model Training and Inference

Classification

To train VMamba models for classification on ImageNet, use the following commands for different configurations:

```bash
python -m torch.distributed.launch --nnodes=1 --node_rank=0 --nproc_per_node=8 --master_addr="127.0.0.1" --master_port=29501 main.py --cfg </path/to/config> --batch-size 128 --data-path </path/of/dataset> --output /tmp
```

If you only want to evaluate performance (together with the parameter count and FLOPs):

```bash
python -m torch.distributed.launch --nnodes=1 --node_rank=0 --nproc_per_node=1 --master_addr="127.0.0.1" --master_port=29501 main.py --cfg </path/to/config> --batch-size 128 --data-path </path/of/dataset> --output /tmp --pretrained </path/of/checkpoint>
```
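
If you prefer to inspect the parameter count and FLOPs directly from Python, the sketch below shows one common approach using fvcore. The stand-in module is a placeholder for a VMamba model built from your config; model construction is repository-specific and omitted here.

```python
import torch
import torch.nn as nn
from fvcore.nn import FlopCountAnalysis  # assumes fvcore is installed

# Stand-in module: replace with the VMamba classifier built from your config.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1000),
).eval()

dummy = torch.randn(1, 3, 224, 224)                   # ImageNet-1K input resolution
params = sum(p.numel() for p in model.parameters())   # total parameter count
flops = FlopCountAnalysis(model, dummy).total()       # FLOPs for one forward pass
print(f"params: {params / 1e6:.2f} M")
print(f"FLOPs : {flops / 1e9:.2f} G")
```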

Please refer to the modelcard for more details.

Detection and Segmentation

To evaluate with mmdetection or mmsegmentation:

```bash
bash ./tools/dist_test.sh </path/to/config> </path/to/checkpoint> 1
```

Use `--tta` to obtain mIoU(MS) for segmentation.

To train with mmdetection or mmsegmentation:

```bash
bash ./tools/dist_train.sh </path/to/config> 8
```

For more information about detection and segmentation tasks, please refer to the manuals of mmdetection and mmsegmentation. Remember to use the appropriate backbone configurations in the configs directory.

Analysis Tools

VMamba includes tools for visualizing the Mamba "attention" maps and the effective receptive field, and for analyzing inference and training throughput. Use the following commands to perform the analysis:

```bash
# Visualize Mamba "Attention"
CUDA_VISIBLE_DEVICES=0 python analyze/attnmap.py

# Analyze the effective receptive field
CUDA_VISIBLE_DEVICES=0 python analyze/erf.py

# Analyze the throughput and train throughput
CUDA_VISIBLE_DEVICES=0 python analyze/tp.py
```
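
For intuition about what the throughput analysis measures, here is a generic timing sketch (not `analyze/tp.py` itself); the stand-in module is a placeholder for the VMamba model you want to benchmark.

```python
import time
import torch
import torch.nn as nn

# Stand-in module: replace with the VMamba model you want to benchmark.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1000),
).cuda().eval()

batch_size, warmup, iters = 128, 10, 50
x = torch.randn(batch_size, 3, 224, 224, device="cuda")

with torch.no_grad():
    for _ in range(warmup):        # warm up kernels and the allocator
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()       # wait for all queued GPU work to finish

print(f"throughput: {batch_size * iters / (time.time() - start):.1f} images/s")
```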

We have also included other analysis tools used in this project. Thanks to all who have contributed to them.

Star History

Star History Chart

Citation

```bibtex
@article{liu2024vmamba,
  title={VMamba: Visual State Space Model},
  author={Liu, Yue and Tian, Yunjie and Zhao, Yuzhong and Yu, Hongtian and Xie, Lingxi and Wang, Yaowei and Ye, Qixiang and Liu, Yunfan},
  journal={arXiv preprint arXiv:2401.10166},
  year={2024}
}
```

Acknowledgment

This project is based on Mamba (paper, code), Swin-Transformer (paper, code), ConvNeXt (paper, code), and OpenMMLab; analyze/get_erf.py is adapted from RepLKNet. Thanks to the authors for their excellent work.