
<p align="center"> <h1 align="middle">EfficientViM</h1> </p> <p align="center"> <img src="assets/toy_mamba.png" width="300px" /> <h3 align="middle">EfficientViM: Efficient Vision Mamba with Hidden State Mixer-based State Space Duality</h2> <p align="middle"> <a href="https://www.sanghyeoklee.com/" target="_blank">Sanghyeok Lee</a>, <a href="https://scholar.google.com/citations?user=IaQRhu8AAAAJ&hl=ko" target="_blank">Joonmyung Choi</a>, <a href="https://hyunwoojkim.com/" target="_blank">Hyunwoo J. Kim</a>* </p> <!-- <p align="middle">NeurIPS 2024</p> --> <p align="middle"> <a href="https://arxiv.org/abs/2411.15241" target='_blank'><img src="https://img.shields.io/badge/arXiv-2411.15241-b31b1b.svg?logo=arxiv"></a> </p> </p>

This repository is an official implementation of EfficientViM: Efficient Vision Mamba with Hidden State Mixer-based State Space Duality.
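
For intuition about the name: the sketch below illustrates the general idea of mixing channels in the compressed hidden-state space of a state-space-duality (SSD, i.e. linear-attention-style) layer, rather than over the full token sequence. It is a toy, single-head simplification written for this README, not the repository's HSM-SSD module; the class name, shapes, and normalization choices are all assumptions.

```python
# Toy illustration of hidden-state channel mixing in an SSD-style layer.
# NOT the authors' HSM-SSD implementation -- a simplified sketch only.
import torch
import torch.nn as nn

class ToyHiddenStateMixerSSD(nn.Module):
    def __init__(self, dim, n_states=64):
        super().__init__()
        self.BC = nn.Linear(dim, 2 * n_states)  # data-dependent B and C projections
        self.mixer = nn.Linear(dim, dim)        # channel mixing applied in hidden-state space
        self.n_states = n_states

    def forward(self, x):                       # x: (batch, seq_len, dim)
        B, C = self.BC(x).split(self.n_states, dim=-1)
        B = B.softmax(dim=1)                    # normalize attention over tokens
        # Compress seq_len tokens into n_states hidden states.
        h = torch.einsum("bln,bld->bnd", B, x)
        # Mixing on n_states << seq_len states is much cheaper than on tokens.
        h = self.mixer(h)
        # Expand hidden states back to the token sequence.
        return torch.einsum("bln,bnd->bld", C, h)

x = torch.randn(2, 196, 128)
print(ToyHiddenStateMixerSSD(128)(x).shape)     # torch.Size([2, 196, 128])
```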

## TODO

## Main Results

### Comparison of efficient networks on ImageNet-1K classification

The EfficientViM family, marked with red and blue stars, achieves the best speed-accuracy trade-offs. (†: trained with distillation)

<div align="center"> <img src="assets/comparison.png" width="800px" /> </div>

### Image classification on ImageNet-1K (pretrained models)

| model | resolution | epochs | top-1 acc. (%) | #params | FLOPs | checkpoint |
| --- | --- | --- | --- | --- | --- | --- |
| EfficientViM-M1 | 224x224 | 300 | 72.9 | 6.7M | 239M | EfficientViM_M1_e300.pth |
| EfficientViM-M1 | 224x224 | 450 | 73.5 | 6.7M | 239M | EfficientViM_M1_e450.pth |
| EfficientViM-M2 | 224x224 | 300 | 75.4 | 13.9M | 355M | EfficientViM_M2_e300.pth |
| EfficientViM-M2 | 224x224 | 450 | 75.8 | 13.9M | 355M | EfficientViM_M2_e450.pth |
| EfficientViM-M3 | 224x224 | 300 | 77.6 | 16.6M | 656M | EfficientViM_M3_e300.pth |
| EfficientViM-M3 | 224x224 | 450 | 77.9 | 16.6M | 656M | EfficientViM_M3_e450.pth |
| EfficientViM-M4 | 256x256 | 300 | 79.4 | 19.6M | 1111M | EfficientViM_M4_e300.pth |
| EfficientViM-M4 | 256x256 | 450 | 79.6 | 19.6M | 1111M | EfficientViM_M4_e450.pth |
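
A minimal sketch of loading one of the released checkpoints. The factory name `EfficientViM_M1`, its import path, and the checkpoint's key layout are assumptions; consult the repository's model definitions for the exact entry points.

```python
# Loading a released checkpoint -- import path and key layout are guesses.
import torch
# from models import EfficientViM_M1   # hypothetical import path

# model = EfficientViM_M1()
state = torch.load("EfficientViM_M1_e300.pth", map_location="cpu")
# Checkpoints often wrap the weights, e.g. under a "model" key:
if isinstance(state, dict) and "model" in state:
    state = state["model"]
# model.load_state_dict(state)
```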

### Image classification on ImageNet-1K with distillation

| model | resolution | epochs | top-1 acc. (%) | checkpoint |
| --- | --- | --- | --- | --- |
| EfficientViM-M1 | 224x224 | 300 | 74.6 | EfficientViM_M1_dist.pth |
| EfficientViM-M2 | 224x224 | 300 | 76.7 | EfficientViM_M2_dist.pth |
| EfficientViM-M3 | 224x224 | 300 | 79.1 | EfficientViM_M3_dist.pth |
| EfficientViM-M4 | 256x256 | 300 | 80.7 | EfficientViM_M4_dist.pth |

## Getting Started

### Installation

```bash
# Clone this repository:
git clone https://github.com/mlvlab/EfficientViM.git
cd EfficientViM

# Create and activate the environment
conda create -n EfficientViM python=3.10
conda activate EfficientViM

# Install dependencies
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
```
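
A quick check that the pinned PyTorch/CUDA combination is the one that actually got installed:

```python
# Versions should match the pins above (1.12.1 / 0.13.1);
# cuda.is_available() should be True on a machine with a CUDA 11.3-capable driver.
import torch
import torchvision

print(torch.__version__, torchvision.__version__)
print(torch.cuda.is_available())
```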

## Training

To train EfficientViM for classification on ImageNet, run `train.sh` in the `classification` directory:

```bash
cd classification
sh train.sh <num-gpus> <batch-size-per-gpu> <epochs> <model-name> <imagenet-path> <output-path>
```

For example, to train EfficientViM-M1 for 450 epochs on 8 GPUs (a total batch size of 2048, computed as `<num-gpus>` × `<batch-size-per-gpu>` = 8 × 256), run:

```bash
sh train.sh 8 256 450 EfficientViM_M1 <imagenet-path> <output-path>
```
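
Assuming the script follows the formula above, the per-GPU batch size can be adjusted to keep the same effective batch size on a different number of GPUs, e.g.:

```bash
# Same effective batch size (2048) on 4 GPUs: 4 x 512 = 2048
sh train.sh 4 512 450 EfficientViM_M1 <imagenet-path> <output-path>
```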

### Training with distillation

To train EfficientViM with the distillation objective of DeiT, run `train_dist.sh` in the `classification` directory:

```bash
sh train_dist.sh <num-gpus> <batch-size-per-gpu> <model-name> <imagenet-path> <output-path>
```
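
For example, mirroring the training command above (the distilled checkpoints in the table were trained for 300 epochs; the script takes no epoch argument, so that is presumably its default):

```bash
sh train_dist.sh 8 256 EfficientViM_M1 <imagenet-path> <output-path>
```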

## Evaluation

To evaluate a pretrained EfficientViM, run `test.sh` in the `classification` directory:

```bash
sh test.sh <num-gpus> <model-name> <imagenet-path> <checkpoint-path>

# For a model trained with distillation:
# sh test_dist.sh <num-gpus> <model-name> <imagenet-path> <checkpoint-path>
```
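
If you just want a single-process top-1 number without the distributed scripts, a minimal check along these lines works; the resize/center-crop preprocessing below is the standard ImageNet pipeline and is an assumption here, so treat `test.sh` as the authoritative protocol.

```python
# Minimal single-GPU/CPU top-1 evaluation sketch (not the repo's test.sh).
import torch
from torchvision import datasets, transforms

tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
val = datasets.ImageFolder("<imagenet-path>/val", tf)  # placeholder path
loader = torch.utils.data.DataLoader(val, batch_size=256, num_workers=8)

@torch.no_grad()
def top1(model):
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=-1) == y).sum().item()
        total += y.numel()
    return correct / total
```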

## Acknowledgements

This repo is built upon Swin, VSSD, SHViT, EfficientViT, and SwiftFormer.
Thanks to the authors for their inspiring work!

## Citation

If this work is helpful for your research, please consider citing it.

```bibtex
@article{EfficientViM,
  title={EfficientViM: Efficient Vision Mamba with Hidden State Mixer-based State Space Duality},
  author={Lee, Sanghyeok and Choi, Joonmyung and Kim, Hyunwoo J.},
  journal={arXiv preprint arXiv:2411.15241},
  year={2024}
}
```