# MTMamba
This repository contains the code and models for the following papers:
- Baijiong Lin, Weisen Jiang, Pengguang Chen, Yu Zhang, Shu Liu, and Ying-Cong Chen. MTMamba: Enhancing Multi-Task Dense Scene Understanding by Mamba-Based Decoders. In European Conference on Computer Vision, 2024.
- Baijiong Lin, Weisen Jiang, Pengguang Chen, Shu Liu, and Ying-Cong Chen. MTMamba++: Enhancing Multi-Task Dense Scene Understanding via Mamba-Based Decoders. arXiv preprint arXiv:2408.15101, 2024.
## Requirements
- PyTorch 2.0.0
- timm 0.9.16
- mmsegmentation 1.2.2
- mamba-ssm 1.1.2
- CUDA 11.8
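
A minimal environment-setup sketch, assuming a fresh conda environment and the pinned versions above, is given below; the exact installation steps used by the authors (e.g. how mmcv is installed for mmsegmentation) may differ.

```bash
# Illustrative setup only; package versions follow the list above, while the
# Python version, wheel index, and use of conda are assumptions.
conda create -n mtmamba python=3.10 -y
conda activate mtmamba
pip install torch==2.0.0 --index-url https://download.pytorch.org/whl/cu118
pip install timm==0.9.16 mmsegmentation==1.2.2 mamba-ssm==1.1.2
```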
## Usage
- Prepare the pretrained Swin-Large checkpoint by running the following command:

  ```bash
  cd pretrained_ckpts
  bash run.sh
  cd ../
  ```
- Download the data from PASCALContext.tar.gz and NYUDv2.tar.gz, and then extract them. Set the dataset directory in the `db_root` variable in `configs/mypath.py` (see the sketch after this list).
- Train the model. Taking NYUDv2 as an example, you can run the following command:

  ```bash
  python -m torch.distributed.launch --nproc_per_node 8 main.py --run_mode train --config_exp ./configs/mtmamba_nyud.yml
  ```

  You can download the pretrained models from mtmamba_nyud.pth.tar, mtmamba_pascal.pth.tar, mtmamba_plus_nyud.pth.tar, and mtmamba_plus_pascal.pth.tar.
- Evaluation. You can run the following command:

  ```bash
  python -m torch.distributed.launch --nproc_per_node 1 main.py --run_mode infer --config_exp ./configs/mtmamba_nyud.yml --trained_model ./ckpts/mtmamba_nyud.pth.tar
  ```
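
For the dataset-path step above, a minimal sketch of what the `db_root` variable in `configs/mypath.py` might look like is shown below; the path is a placeholder and the actual file may contain more than this.

```python
# configs/mypath.py -- illustrative sketch only; the real file may differ.
# db_root should point at the directory containing the extracted
# PASCALContext/ and NYUDv2/ folders.
db_root = '/path/to/datasets/'
```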
## Acknowledgement
We would like to thank the authors who released the public repositories: Multi-Task-Transformer, mamba, and VMamba.
## Citation
If you find this code/work useful in your own research, please cite the following:
```bibtex
@inproceedings{lin2024mtmamba,
  title={{MTMamba}: Enhancing Multi-Task Dense Scene Understanding by Mamba-Based Decoders},
  author={Lin, Baijiong and Jiang, Weisen and Chen, Pengguang and Zhang, Yu and Liu, Shu and Chen, Ying-Cong},
  booktitle={European Conference on Computer Vision},
  year={2024}
}

@article{lin2024mtmambaplus,
  title={{MTMamba++}: Enhancing Multi-Task Dense Scene Understanding via Mamba-Based Decoders},
  author={Lin, Baijiong and Jiang, Weisen and Chen, Pengguang and Liu, Shu and Chen, Ying-Cong},
  journal={arXiv preprint arXiv:2408.15101},
  year={2024}
}
```