# <img src="img/logo.png" style="vertical-align: -10px;" height="40" width="40"> SAM2Long

This repository is the official implementation of SAM2Long.

<!-- <img align="center" src="img/pipeline.png" style="display: block; margin-left: auto; margin-right: auto; width: 100%;" /> -->
<p align="center" style="margin-top: 0.5em">

License: CC BY-NC 4.0<br> <a href="https://arxiv.org/abs/2410.16268"><img src="https://img.shields.io/badge/arXiv-paper-red"></a> <a href="https://mark12ding.github.io/project/SAM2Long/"><img src="https://img.shields.io/badge/Project-Homepage-green"></a> <a href="https://mark12ding.github.io/project/SAM2Long/asset/images/paper.pdf"><img src="https://img.shields.io/badge/PDF-red"></a> <a href="https://huggingface.co/papers/2410.16268"><img src="https://img.shields.io/badge/πŸ€—_Hugging_Face-yellow"></a>

</p>

SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree<br> Shuangrui Ding, Rui Qian, Xiaoyi Dong, Pan Zhang<br> Yuhang Zang, Yuhang Cao, Yuwei Guo, Dahua Lin, Jiaqi Wang<br> CUHK, Shanghai AI Lab

## πŸ’‘ Highlights

### πŸ”₯ Enhanced Capability in Long-Term Video Segmentation

SAM2Long significantly improves upon SAM 2 by addressing the error-accumulation issue, particularly in challenging long-term video scenarios involving object occlusion and reappearance. With SAM2Long, segmentation remains resilient and accurate over time, even as objects are occluded or reappear in the video stream.

<img align="center" src="img/teaser.png" style=" display: block; margin-left: auto; margin-right: auto; width: 100%;" />

### ⚑️ A Simple Training-free Memory Tree

SAM2Long introduces a training-free memory tree that effectively reduces the risk of error propagation over time. By maintaining diverse segmentation hypotheses and dynamically pruning less optimal paths as the video progresses, this approach enhances segmentation without the need for additional parameters or further training. It maximizes the potential of SAM 2 to deliver better results in complex video scenarios.
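The pathway-pruning idea can be sketched in a few lines of Python. This is an illustrative beam-search-style sketch, not the actual SAM 2 interface: `segment_fn`, the per-candidate confidences, and the pathway counts are hypothetical stand-ins for SAM 2's multi-mask decoder outputs.

```python
import heapq

def propagate_memory_tree(frames, segment_fn, num_pathways=3, candidates_per_path=3):
    """Carry several segmentation hypotheses ("pathways") through the video
    and prune to the highest-scoring ones at every frame, instead of
    committing to a single mask per frame as plain propagation would."""
    pathways = [([], 0.0)]  # (masks chosen so far, cumulative confidence)
    for frame in frames:
        expanded = []
        for masks, score in pathways:
            # each surviving pathway proposes several candidate masks
            for mask, conf in segment_fn(frame, masks)[:candidates_per_path]:
                expanded.append((masks + [mask], score + conf))
        # dynamic pruning: keep only the top-scoring pathways
        pathways = heapq.nlargest(num_pathways, expanded, key=lambda p: p[1])
    return max(pathways, key=lambda p: p[1])  # best hypothesis at the end
```

Because an ambiguous frame no longer forces a single irreversible choice, later frames with clearer evidence can steer the tree back toward the correct object, which is the resilience property described above.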

### 🀯 Superior Performance Compared to SAM 2

SAM2Long pushes the performance limits of SAM 2 even further across various video object segmentation benchmarks, achieving an average improvement of 3 points in J&F across all 24 head-to-head comparisons on long-term video datasets such as SA-V and LVOS.

## πŸš€ Main Results

### SAM 2.1 checkpoints

The table below provides a one-to-one comparison between SAM 2 and SAM2Long using the improved SAM 2.1 checkpoints.

| Method | Backbone | SA-V val (J&F) | SA-V test (J&F) | LVOS v2 (J&F) |
| :--- | :---: | :---: | :---: | :---: |
| SAM 2 | Tiny | 73.5 | 74.6 | 77.8 |
| SAM2Long | Tiny | 77.0 | 78.7 | 81.4 |
| SAM 2 | Small | 73.0 | 74.6 | 79.7 |
| SAM2Long | Small | 77.7 | 78.1 | 83.2 |
| SAM 2 | Base+ | 75.4 | 74.6 | 80.2 |
| SAM2Long | Base+ | 78.4 | 78.5 | 82.3 |
| SAM 2 | Large | 76.3 | 75.5 | 83.0 |
| SAM2Long | Large | 80.8 | 80.8 | 85.2 |

### SAM 2 checkpoints

The table below provides a one-to-one comparison between SAM 2 and SAM2Long using the SAM 2 checkpoints.

| Method | Backbone | SA-V val (J&F) | SA-V test (J&F) | LVOS v2 (J&F) |
| :--- | :---: | :---: | :---: | :---: |
| SAM 2 | Tiny | 75.1 | 76.3 | 81.6 |
| SAM2Long | Tiny | 78.9 | 79.0 | 82.4 |
| SAM 2 | Small | 76.9 | 76.9 | 82.1 |
| SAM2Long | Small | 79.6 | 80.4 | 84.3 |
| SAM 2 | Base+ | 78.0 | 77.7 | 83.1 |
| SAM2Long | Base+ | 80.5 | 80.8 | 85.2 |
| SAM 2 | Large | 78.6 | 79.6 | 84.0 |
| SAM2Long | Large | 81.1 | 81.2 | 85.3 |
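As a quick sanity check on the average-gain figure quoted above, the 24 per-comparison improvements can be averaged directly, with scores copied verbatim from the two tables:

```python
# (SAM 2, SAM2Long) J&F pairs from the two tables above:
# four backbones x three benchmarks x two checkpoint families.
pairs = [
    # SAM 2.1 checkpoints
    (73.5, 77.0), (74.6, 78.7), (77.8, 81.4),   # Tiny
    (73.0, 77.7), (74.6, 78.1), (79.7, 83.2),   # Small
    (75.4, 78.4), (74.6, 78.5), (80.2, 82.3),   # Base+
    (76.3, 80.8), (75.5, 80.8), (83.0, 85.2),   # Large
    # SAM 2 checkpoints
    (75.1, 78.9), (76.3, 79.0), (81.6, 82.4),   # Tiny
    (76.9, 79.6), (76.9, 80.4), (82.1, 84.3),   # Small
    (78.0, 80.5), (77.7, 80.8), (83.1, 85.2),   # Base+
    (78.6, 81.1), (79.6, 81.2), (84.0, 85.3),   # Large
]
avg_gain = sum(after - before for before, after in pairs) / len(pairs)
print(f"average J&F gain over {len(pairs)} comparisons: {avg_gain:.1f}")
# prints: average J&F gain over 24 comparisons: 3.0
```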

## πŸ› οΈ Usage

### Installation

Please follow the installation instructions in the official SAM 2 repo. If you encounter issues running the code, we recommend creating a new environment dedicated to SAM2Long rather than sharing one with SAM 2. For further details, please check this issue.

### Download Checkpoints

All the model checkpoints can be downloaded by running:

```bash
cd checkpoints && \
./download_ckpts.sh && \
cd ..
```

### Inference

Inference instructions are provided in INFERENCE.md.

### Evaluation

The evaluation code can be found here.

To evaluate performance on seen and unseen categories in the LVOS dataset, refer to the evaluation code available here.
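For reference, J measures region similarity (mask IoU) and F measures boundary accuracy; the benchmarks report their mean as J&F. A minimal sketch of the two metrics, representing a mask as a set of `(row, col)` foreground pixels and using an exact-match boundary F without the tolerance band applied by the official evaluation toolkits:

```python
def region_j(pred, gt):
    """J: region similarity, the IoU between predicted and ground-truth masks."""
    union = len(pred | gt)
    return len(pred & gt) / union if union else 1.0

def boundary(mask):
    # boundary pixels: those with at least one 4-neighbour outside the mask
    return {(r, c) for r, c in mask
            if any((r + dr, c + dc) not in mask
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))}

def boundary_f(pred, gt):
    """F: boundary F-measure (simplified: exact boundary match, no tolerance)."""
    bp, bg = boundary(pred), boundary(gt)
    if not bp and not bg:
        return 1.0
    precision = len(bp & bg) / len(bp) if bp else 0.0
    recall = len(bp & bg) / len(bg) if bg else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def j_and_f(pred, gt):
    """The J&F score reported in the tables is the mean of J and F."""
    return (region_j(pred, gt) + boundary_f(pred, gt)) / 2
```

This is only meant to make the reported numbers concrete; use the linked evaluation code for benchmark results.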

## ☎️ Contact

Shuangrui Ding: mark12ding@gmail.com

## πŸ”’ License

The majority of this project is released under the CC-BY-NC 4.0 license as found in the LICENSE file. The original SAM 2 model checkpoints and SAM 2 training code are licensed under Apache 2.0.

πŸ‘ Acknowledgements

I would like to thank Yixuan Wang for his assistance with dataset preparation and Haohang Xu for his insightful discussions.

This project is built upon SAM 2 and the format of this README is inspired by VideoMAE.

βœ’οΈ Citation

If you find our work helpful for your research, please consider giving a star ⭐ and a citation πŸ“.

```bibtex
@article{ding2024sam2long,
  title={SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree},
  author={Ding, Shuangrui and Qian, Rui and Dong, Xiaoyi and Zhang, Pan and Zang, Yuhang and Cao, Yuhang and Guo, Yuwei and Lin, Dahua and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2410.16268},
  year={2024}
}
```