RoboMamba

The official repository of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation".


Our main contributions are:


Table 2: Comparison of the success rates between RoboMamba and baselines across various training (seen) and test (unseen) categories.


Installation

pip install -r requirements.txt

How to test

bash script/test.sh

Checkpoint

The checkpoints are available in the test branch. Thank you very much for your interest in our work. If you need the training code, please send us an email describing your research requirements.

📚 BibTeX

@inproceedings{liurobomamba,
  title={RoboMamba: Efficient Vision-Language-Action Model for Robotic Reasoning and Manipulation},
  author={Liu, Jiaming and Liu, Mengzhen and Wang, Zhenyu and An, Pengju and Li, Xiaoqi and Zhou, Kaichen and Yang, Senqiao and Zhang, Renrui and Guo, Yandong and Zhang, Shanghang},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}