Vivim: a Video Vision Mamba for Medical Video Segmentation
[arXiv]
News
- 24-08-01. Uploaded several example cases of the VTUS dataset.
- 24-03-11. Code updated. Welcome to try it out.
- 24-02-08. Updated the method and experiments.
- 24-01-26. This project is still being updated rapidly. Check the TODO list to see what will be released next.
- 24-01-25. The paper has been released on arXiv.
A Quick Overview
<img width="600" height="400" src="https://github.com/scott-yjyang/Vivim/blob/main/assets/framework1.png">
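The idea behind the framework above is to flatten video features into one long spatiotemporal token sequence and model it with a Mamba (selective state-space) layer rather than attention. The snippet below is only an illustrative sketch of that idea, not the released Vivim code: the class name, the flattening order, and the use of the plain `Mamba` block from `mamba_ssm` are assumptions for demonstration.

```python
# Illustrative sketch of a "temporal Mamba"-style block (not the released Vivim code).
# Video features of shape (B, T, C, H, W) are flattened into a spatiotemporal token
# sequence, mixed by a Mamba (selective state-space) layer, and reshaped back.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # requires the installation steps below and a CUDA GPU


class ToySpatiotemporalMamba(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.mamba = Mamba(d_model=channels)  # sequence model over spatiotemporal tokens

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        tokens = x.permute(0, 1, 3, 4, 2).reshape(b, t * h * w, c)    # (B, T*H*W, C)
        tokens = self.mamba(self.norm(tokens)) + tokens               # residual SSM mixing
        return tokens.reshape(b, t, h, w, c).permute(0, 1, 4, 2, 3)   # back to (B, T, C, H, W)
```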
Environment Setup
Clone this repository and navigate to the root directory of the project.
```bash
git clone https://github.com/scott-yjyang/Vivim.git
cd Vivim
```
Install the basic packages:
```bash
conda env create -f environment.yml
```
Install causal-conv1d:
```bash
cd causal-conv1d
python setup.py install
```
Install mamba:
```bash
cd ../mamba
python setup.py install
```
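If the builds succeed, a quick sanity check is to push a random sequence through a single Mamba block. This is a minimal sketch assuming a CUDA-capable GPU (the `mamba_ssm` selective-scan kernels are CUDA-only) and the standard `Mamba` module from `mamba_ssm`:

```python
# Sanity check: forward a random sequence through one Mamba block.
# Assumes a CUDA GPU; mamba_ssm's kernels do not run on CPU.
import torch
from mamba_ssm import Mamba

x = torch.randn(2, 64, 192, device="cuda")       # (batch, sequence length, channels)
block = Mamba(d_model=192, d_state=16, d_conv=4, expand=2).to("cuda")
y = block(x)
print(y.shape)                                   # expected: torch.Size([2, 64, 192])
```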
TODO LIST
- Release Model
- Release training scripts
- Release evaluation
- Release Ultrasound dataset
- Experiments on other video object segmentation datasets.
- Configuration
Thanks
Code is based on hustvl/Vim and bowang-lab/U-Mamba.
Cite
If you find this work useful, please cite our paper and star this repository:
```bibtex
@article{yang2024vivim,
  title={Vivim: a Video Vision Mamba for Medical Video Object Segmentation},
  author={Yang, Yijun and Xing, Zhaohu and Zhu, Lei},
  journal={arXiv preprint arXiv:2401.14168},
  year={2024}
}
```