
<div align="center"> <h1>Spatial-Mamba: Effective Visual State Space Models via Structure-Aware State Fusion</h1> <div> <a href='https://github.com/EdwardChasel' target='_blank'>Chaodong Xiao<sup>1,2,*</sup></a>, <a href='https://scholar.google.com/citations?user=LhdBgMAAAAAJ' target='_blank'>Minghan Li<sup>1,3,*</sup></a>, <a href='https://scholar.google.com.hk/citations?hl=zh-CN&user=UX26wSMAAAAJ' target='_blank'>Zhengqiang Zhang<sup>1,2</sup></a>, <a href='https://gr.xjtu.edu.cn/en/web/dymeng/1' target='_blank'>Deyu Meng<sup>4</sup></a>, <a href='https://www4.comp.polyu.edu.hk/~cslzhang/' target='_blank'>Lei Zhang<sup>1,2,ā€  </sup></a> </div> <div> <sup>1</sup>The Hong Kong Polytechnic University, <sup>2</sup>OPPO Research Institute,<br><sup>3</sup>Harvard Medical School, <sup>4</sup>Xi'an Jiaotong University </div> <div> (*) equal contribution, (ā€ ) corresponding author </div>

[šŸ“ arXiv paper]


</div>

Abstract

Selective state space models (SSMs), such as Mamba, excel at capturing long-range dependencies in 1D sequential data, but their application to 2D vision tasks remains challenging. Current visual SSMs often convert images into 1D sequences and employ various scanning patterns to incorporate local spatial dependencies. However, these methods struggle to effectively capture complex spatial structures in images and incur increased computational cost from the lengthened scanning paths. To address these limitations, we propose Spatial-Mamba, a novel approach that establishes neighborhood connectivity directly in the state space. Instead of relying solely on sequential state transitions, we introduce a structure-aware state fusion equation, which leverages dilated convolutions to capture image spatial structural dependencies, significantly enhancing the flow of visual contextual information. Spatial-Mamba proceeds in three stages: initial state computation via a unidirectional scan, spatial context acquisition through structure-aware state fusion, and final state computation using the observation equation. Our theoretical analysis shows that Spatial-Mamba unifies the original Mamba and linear attention under the same matrix multiplication framework, providing a deeper understanding of our method. Experimental results demonstrate that Spatial-Mamba, even with a single scan, matches or surpasses state-of-the-art SSM-based models in image classification, detection and segmentation.
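
To make the three-stage pipeline concrete, the sketch below traces it in plain PyTorch. It is an illustration of the idea rather than the repository's implementation: the naive Python-loop scan, the depthwise dilated-convolution fusion, and all tensor shapes and names are simplifying assumptions.

    import torch
    import torch.nn.functional as F

    def selective_scan_1d(x, a, b):
        """Stage 1: naive unidirectional scan, h_t = a_t * h_{t-1} + b_t * x_t."""
        batch, length, dim = x.shape
        h = x.new_zeros(batch, dim)
        states = []
        for t in range(length):
            h = a[:, t] * h + b[:, t] * x[:, t]
            states.append(h)
        return torch.stack(states, dim=1)  # (batch, length, dim)

    def structure_aware_state_fusion(states, kernels, hgt, wdt, dilations=(1, 2, 3)):
        """Stage 2: fuse each state with its 2D neighbours via dilated depthwise convs."""
        batch, length, dim = states.shape
        grid = states.transpose(1, 2).reshape(batch, dim, hgt, wdt)
        fused = sum(
            F.conv2d(grid, k, padding=d, dilation=d, groups=dim)
            for k, d in zip(kernels, dilations)
        )
        return fused.reshape(batch, dim, length).transpose(1, 2)

    # Toy example on a 14x14 grid of 64-dim tokens.
    batch, hgt, wdt, dim = 2, 14, 14, 64
    x = torch.randn(batch, hgt * wdt, dim)  # flattened image tokens
    a = torch.rand(batch, hgt * wdt, dim)   # input-dependent decay in (0, 1)
    b = torch.randn(batch, hgt * wdt, dim)  # input projection B_t
    c = torch.randn(batch, hgt * wdt, dim)  # observation projection C_t
    kernels = [torch.randn(dim, 1, 3, 3) * 0.1 for _ in range(3)]  # one 3x3 kernel per dilation

    states = selective_scan_1d(x, a, b)                              # stage 1
    fused = structure_aware_state_fusion(states, kernels, hgt, wdt)  # stage 2
    y = c * fused                                                    # stage 3: y_t = C_t h_t
    print(y.shape)                                                   # torch.Size([2, 196, 64])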

🎬 Overview

<p align="center"> <img src="assets/main.png" alt="main" width="80%"> </p> <p align="center"> <img src="assets/sasf.png" alt="sasf" width="80%"> </p>

🎯 Main Results

<p align="center"> <img src="assets/classification.png" alt="classification" width="80%"> </p> <p align="center"> <img src="assets/detection.png" alt="detection" width="80%"> </p> <p align="center"> <img src="assets/segmentation.png" alt="segmentation" width="80%"> </p>

šŸ› ļø Getting Started

  1. Clone repo

    git clone https://github.com/EdwardChasel/Spatial-Mamba.git
    cd Spatial-Mamba
    
  2. Create and activate a new conda environment

    conda create -n Spatial-Mamba python=3.10
    conda activate Spatial-Mamba
    
  3. Install dependencies

    pip install --upgrade pip
    pip install -r requirements.txt
    cd kernels/selective_scan && pip install .
    cd ../dwconv2d && python3 setup.py install --user
    
  4. Dependencies for detection and segmentation (optional)

    pip install mmengine==0.10.1 mmcv==2.1.0 opencv-python-headless ftfy regex
    pip install mmdet==3.3.0 mmsegmentation==1.2.2 mmpretrain==1.2.0
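
After installation, a quick import check helps confirm that the compiled CUDA extensions are visible from Python. The extension module names below are assumptions inferred from the kernel directory names, not confirmed identifiers, so substitute whatever the builds actually register:

    import torch

    print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

    # Assumed extension names inferred from the kernel directories -- adjust
    # these if the builds register different module names.
    for name in ("selective_scan_cuda_core", "dwconv2d_cuda"):
        try:
            __import__(name)
            print(f"{name}: OK")
        except ImportError as exc:
            print(f"{name}: not importable ({exc})")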
    

✨ Pre-trained Models

<details> <summary> ImageNet-1k Image Classification </summary> <br> <div>
| name | pretrain | resolution | acc@1 | #param | FLOPs | download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Spatial-Mamba-T | ImageNet-1K | 224x224 | 83.5 | 27M | 4.5G | ckpt \| config |
| Spatial-Mamba-S | ImageNet-1K | 224x224 | 84.6 | 43M | 7.1G | ckpt \| config |
| Spatial-Mamba-B | ImageNet-1K | 224x224 | 85.3 | 96M | 15.8G | ckpt \| config |
</div> </details> <details> <summary> COCO Object Detection and Instance Segmentation </summary> <br> <div>
| backbone | method | schedule | mAP (box/mask) | #param | FLOPs | download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Spatial-Mamba-T | Mask R-CNN | 1x | 47.6 / 42.9 | 46M | 261G | ckpt \| config |
| Spatial-Mamba-S | Mask R-CNN | 1x | 49.2 / 44.0 | 63M | 315G | ckpt \| config |
| Spatial-Mamba-B | Mask R-CNN | 1x | 50.4 / 45.1 | 115M | 494G | ckpt \| config |
| Spatial-Mamba-T | Mask R-CNN | 3x | 49.3 / 43.6 | 46M | 261G | ckpt \| config |
| Spatial-Mamba-S | Mask R-CNN | 3x | 50.5 / 44.6 | 63M | 315G | ckpt \| config |
</div> </details> <details> <summary> ADE20K Semantic Segmentation </summary> <br> <div>
| backbone | method | resolution | mIoU (ss/ms) | #param | FLOPs | download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Spatial-Mamba-T | UPerNet | 512x512 | 48.6 / 49.4 | 57M | 936G | ckpt \| config |
| Spatial-Mamba-S | UPerNet | 512x512 | 50.6 / 51.4 | 73M | 992G | ckpt \| config |
| Spatial-Mamba-B | UPerNet | 512x512 | 51.8 / 52.6 | 127M | 1176G | ckpt \| config |
</div> </details>

📚 Data Preparation
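
For classification, the models expect ImageNet-1K in the standard class-per-folder layout read by torchvision's `ImageFolder` (an assumption based on the acknowledged Swin-Transformer and VMamba codebases). A minimal validation-loader sketch, with placeholder paths:

    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # Assumes the usual layout: imagenet/{train,val}/<class_id>/<image>.JPEG
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),  # the pre-trained models use 224x224 inputs
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # standard ImageNet statistics
                             std=[0.229, 0.224, 0.225]),
    ])

    val_set = datasets.ImageFolder("path/to/imagenet/val", transform=transform)
    val_loader = DataLoader(val_set, batch_size=64, num_workers=8, shuffle=False)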

🚀 Quick Start
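
As a first step, a downloaded checkpoint can be inspected with plain PyTorch before wiring it into the classification, detection, or segmentation configs; the filename below is a placeholder for the ckpt links in the tables above:

    import torch

    # Placeholder filename -- use a checkpoint downloaded from the tables above.
    ckpt = torch.load("spatial_mamba_t.ckpt", map_location="cpu")

    # Weights are commonly nested under a "model" key; fall back to the raw object.
    state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

    print(f"{len(state_dict)} tensors")
    for name, tensor in list(state_dict.items())[:5]:
        print(name, tuple(tensor.shape))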

šŸ–Šļø Citation

    @article{xiao2024spatialmamba,
      title={Spatial-Mamba: Effective Visual State Space Models via Structure-Aware State Fusion},
      author={Chaodong Xiao and Minghan Li and Zhengqiang Zhang and Deyu Meng and Lei Zhang},
      journal={arXiv preprint arXiv:2410.15091},
      year={2024}
    }

💌 Acknowledgments

This project is largely based on Mamba, VMamba, MLLA, Swin-Transformer, RepLKNet and OpenMMLab. We are truly grateful for their excellent work.

🎫 License

This project is released under the Apache 2.0 license.