SAM-Med2D [[Paper](https://arxiv.org/abs/2308.16184)]

<p align="center">
  <a href="https://openxlab.org.cn/datasets/GMAI/SA-Med2D-20M"><img src="https://img.shields.io/badge/Data-SAMed2D_20M-blue?logo=red" alt="SA-Med2D-20M dataset"></a>
  <a href="https://arxiv.org/abs/2308.16184"><img src="https://img.shields.io/badge/cs.CV-2308.16184-b31b1b?logo=arxiv&logoColor=red" alt="arXiv"></a>
  <a href="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/SAM-Med2D_wechat_group.jpeg"><img src="https://img.shields.io/badge/WeChat-Group-green?logo=wechat" alt="WeChat Group"></a>
  <a target="_blank" href="https://colab.research.google.com/github/OpenGVLab/SAM-Med2D/blob/main/predictor_example.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</p>

<!-- ## Description -->

🌀️ Highlights

πŸ”₯ Updates

πŸ‘‰ Dataset

SAM-Med2D is trained and tested on a dataset that includes 4.6M images and 19.7M masks. This dataset covers 10 medical data modalities, 4 anatomical structures + lesions, and 31 major human organs. To our knowledge, this is currently the largest and most diverse medical image segmentation dataset in terms of quantity and coverage of categories.

<p align="center"><img width="800" alt="image" src="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/dataset.png"></p>

πŸ‘‰ Framework

The pipeline of SAM-Med2D. We freeze the image encoder and incorporate learnable adapter layers in each Transformer block to acquire domain-specific knowledge in the medical field. We fine-tune the prompt encoder using point, Bbox, and mask information, while updating the parameters of the mask decoder through interactive training.

<p align="center"><img width="800" alt="image" src="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/framwork.png"></p>

πŸ‘‰ Results

<table> <caption align="center">Quantitative comparison of different methods on the test set: </caption> <thead> <tr> <th>Model</th> <th>Resolution</th> <th>Bbox (%)</th> <th>1 pt (%)</th> <th>3 pts (%)</th> <th>5 pts (%)</th> <th>FPS</th> <th>Checkpoint</th> </tr> </thead> <tbody> <tr> <td align="center">SAM</td> <td align="center">$256\times256$</td> <td align="center">61.63</td> <td align="center">18.94</td> <td align="center">28.28</td> <td align="center">37.47</td> <td align="center">51</td> <td align="center"><a href="https://drive.google.com/file/d/1_U26MIJhWnWVwmI5JkGg2cd2J6MvkqU-/view?usp=drive_link">Offical</a></td> </tr> <tr> <td align="center">SAM</td> <td align="center">$1024\times1024$</td> <td align="center">74.49</td> <td align="center">36.88</td> <td align="center">42.00</td> <td align="center">47.57</td> <td align="center">8</td> <td align="center"><a href="https://drive.google.com/file/d/1_U26MIJhWnWVwmI5JkGg2cd2J6MvkqU-/view?usp=drive_link">Offical</a></td> </tr> <tr> <td align="center">FT-SAM</td> <td align="center">$256\times256$</td> <td align="center">73.56</td> <td align="center">60.11</td> <td align="center">70.95</td> <td align="center">75.51</td> <td align="center">51</td> <td align="center"><a href="https://drive.google.com/file/d/1J4qQt9MZZYdv1eoxMTJ4FL8Fz65iUFM8/view?usp=drive_link">FT-SAM</a></td> </tr> <tr> <td align="center">SAM-Med2D</td> <td align="center">$256\times256$</td> <td align="center">79.30</td> <td align="center">70.01</td> <td align="center">76.35</td> <td align="center">78.68</td> <td align="center">35</td> <td align="center"><a href="https://drive.google.com/file/d/1ARiB5RkSsWmAB_8mqWnwDF8ZKTtFwsjl/view?usp=drive_link">SAM-Med2D</a></td> </tr> </tbody> </table>

Baidu Netdisk link: https://pan.baidu.com/s/1HWo_s8O7r4iQI6irMYU8vQ?pwd=dk5x (extraction code: dk5x)

<table> <caption align="center">Generalization validation on 9 MICCAI2023 datasets, where "*" denotes that we drop adapter layer of SAM-Med2D in test phase: </caption> <thead> <tr> <th rowspan="2">Datasets</th> <th colspan="3">Bbox prompt (%)</th> <th colspan="3">1 point prompt (%)</th> </tr> <tr> <th>SAM</th> <th>SAM-Med2D*</th> <th>SAM-Med2D</th> <th>SAM</th> <th>SAM-Med2D*</th> <th>SAM-Med2D</th> </tr> </thead> <tbody> <tr> <td align="center"><a href="https://www.synapse.org/#!Synapse:syn51236108/wiki/621615">CrossMoDA23</a></td> <td align="center">78.12</td> <td align="center">86.26</td> <td align="center">88.42</td> <td align="center">33.84</td> <td align="center">65.85</td> <td align="center">85.26</td> </tr> <tr> <td align="center"><a href="https://kits-challenge.org/kits23/">KiTS23</a></td> <td align="center">81.52</td> <td align="center">86.14</td> <td align="center">89.89</td> <td align="center">31.36</td> <td align="center">56.67</td> <td align="center">83.71</td> </tr> <tr> <td align="center"><a href="https://codalab.lisn.upsaclay.fr/competitions/12239#learn_the_details">FLARE23</a></td> <td align="center">73.20</td> <td align="center">77.18</td> <td align="center">85.09</td> <td align="center">19.87</td> <td align="center">32.01</td> <td align="center">77.17</td> </tr> <tr> <td align="center"><a href="https://atlas-challenge.u-bourgogne.fr/">ATLAS2023</a></td> <td align="center">76.98</td> <td align="center">79.09</td> <td align="center">82.59</td> <td align="center">29.07</td> <td align="center">45.25</td> <td align="center">64.76</td> </tr> <tr> <td align="center"><a href="https://multicenteraorta.grand-challenge.org/">SEG2023</a></td> <td align="center">64.82</td> <td align="center">81.85</td> <td align="center">85.09</td> <td align="center">21.15</td> <td align="center">34.71</td> <td align="center">72.08</td> </tr> <tr> <td align="center"><a href="https://lnq2023.grand-challenge.org/lnq2023/">LNQ2023</a></td> <td align="center">53.02</td> <td align="center">57.37</td> <td align="center">58.01</td> <td align="center">7.05</td> <td align="center">7.21</td> <td align="center">37.64</td> </tr> <tr> <td align="center"><a href="https://codalab.lisn.upsaclay.fr/competitions/9804">CAS2023</a></td> <td align="center">61.53</td> <td align="center">78.20</td> <td align="center">81.10</td> <td align="center">22.75</td> <td align="center">46.85</td> <td align="center">78.46</td> </tr> <tr> <td align="center"><a href="https://tdsc-abus2023.grand-challenge.org/Dataset/">TDSC-ABUS2023</a></td> <td align="center">64.31</td> <td align="center">69.00</td> <td align="center">66.14</td> <td align="center">8.24</td> <td align="center">18.98</td> <td align="center">43.55</td> </tr> <tr> <td align="center"><a href="https://toothfairy.grand-challenge.org/toothfairy/">ToothFairy2023</a></td> <td align="center">43.40</td> <td align="center">39.13</td> <td align="center">41.23</td> <td align="center">5.47</td> <td align="center">5.27</td> <td align="center">12.93</td> </tr> <tr> <td align="center">Weighted sum</td> <td align="center">73.49</td> <td align="center">77.67</td> <td align="center">84.88</td> <td align="center">20.88</td> <td align="center">34.30</td> <td align="center">76.63</td> </tr> </tbody> </table>

Erratum

We acknowledge that the test results reported in Table 4 of our original paper contained anomalies. We have updated the data for this project and corrected the corresponding values above; Table 4 will be corrected in the next version of the paper. We apologize for any inconvenience this may have caused.

πŸ‘‰ Visualization

<p align="center"><img width="800" alt="image" src="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/visualization.png"></p>

πŸ‘‰ Train

Prepare your own dataset by following the samples in SAM-Med2D/data_demo and replacing them with data from your specific scenario. You need to generate the image2label_train.json file before running train.py (see the sketch below for one way to build it).
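
A minimal sketch of building image2label_train.json. It assumes the file maps each image path to the list of its mask paths and that masks share the image's filename stem, mirroring the layout in SAM-Med2D/data_demo; check the sample JSON shipped there and adapt the globbing to your own data.

```python
import json
from pathlib import Path


def build_image2label(image_dir: str, mask_dir: str, out_file: str) -> None:
    """Write a JSON mapping: image path -> list of mask paths (assumed schema)."""
    mapping = {}
    for image_path in sorted(Path(image_dir).glob("*.png")):
        # Assumed convention: masks share the image stem as a prefix,
        # e.g. case001.png -> case001_liver.png, case001_kidney.png, ...
        masks = sorted(str(p) for p in Path(mask_dir).glob(f"{image_path.stem}*.png"))
        if masks:
            mapping[str(image_path)] = masks
    with open(out_file, "w") as f:
        json.dump(mapping, f, indent=2)


if __name__ == "__main__":
    build_image2label("data_demo/images", "data_demo/masks", "data_demo/image2label_train.json")
```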

If you want to use mixed-precision training, please install Apex. If you prefer not to install Apex, comment out the line `from apex import amp` in train.py and set `use_amp` to False.
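
For reference, the snippet below illustrates the generic Apex AMP pattern that note refers to. It is a hedged sketch, not the repository's training loop: it only shows how a `use_amp` flag toggles between `amp.scale_loss` and a plain backward pass.

```python
import torch
from apex import amp  # comment this out and set use_amp = False if Apex is not installed

use_amp = True
model = torch.nn.Linear(256, 1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

if use_amp:
    # opt_level "O1" = mixed precision with automatic casting
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

x = torch.randn(8, 256).cuda()
loss = model(x).mean()

if use_amp:
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
else:
    loss.backward()
optimizer.step()
```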

cd ./SAM-Med2D
python train.py

πŸ‘‰ Test

Prepare your own dataset by following the samples in SAM-Med2D/data_demo and replacing them with data from your specific scenario. You need to generate the label2image_test.json file before running test.py.

cd ./SAM-Med2D
python test.py

πŸ‘‰ Deploy

Export to ONNX

python3 scripts/export_onnx_encoder_model.py --sam_checkpoint /path/to/sam-med2d_b.pth --output /path/to/sam-med2d_b.encoder.onnx --model-type vit_b --image_size 256 --encoder_adapter True
python3 scripts/export_onnx_model.py --checkpoint /path/to/sam-med2d_b.pth --output /path/to/sam-med2d_b.decoder.onnx --model-type vit_b --return-single-mask
# cd examples/SAM-Med2D-onnxruntime
python3 main.py --encoder_model /path/to/sam-med2d_b.encoder.onnx --decoder_model /path/to/sam-med2d_b.decoder.onnx
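
If you want to sanity-check the exported encoder outside the example script, the snippet below is a minimal onnxruntime sketch. The input name and preprocessing are assumptions; inspect the exported graph via `session.get_inputs()` to confirm them.

```python
import numpy as np
import onnxruntime as ort

# Load the exported encoder; use CUDAExecutionProvider instead if a GPU build is installed.
encoder = ort.InferenceSession(
    "/path/to/sam-med2d_b.encoder.onnx", providers=["CPUExecutionProvider"]
)
input_name = encoder.get_inputs()[0].name  # discover the actual input name from the graph

# Stand-in for a preprocessed image: 1x3x256x256 float32, matching --image_size 256 above.
image = np.random.rand(1, 3, 256, 256).astype(np.float32)

outputs = encoder.run(None, {input_name: image})
print("image embedding shape:", outputs[0].shape)
```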

πŸš€ Try SAM-Med2D
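
A hedged sketch of interactive inference with the released checkpoint, following the Colab notebook linked in the badges above (predictor_example.ipynb). The module and class names (`segment_anything.predictor_sammed`, `SammedPredictor`) and the demo image path are assumptions taken from that notebook; verify them against the repository.

```python
from argparse import Namespace

import cv2
import numpy as np
import torch

from segment_anything import sam_model_registry
from segment_anything.predictor_sammed import SammedPredictor  # assumed module path (see note above)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Build the model with the fine-tuned checkpoint; argument names follow the notebook.
args = Namespace(image_size=256, encoder_adapter=True,
                 sam_checkpoint="pretrain_model/sam-med2d_b.pth")
model = sam_model_registry["vit_b"](args).to(device)
predictor = SammedPredictor(model)

# Hypothetical demo image path -- replace with your own file.
image = cv2.cvtColor(cv2.imread("data_demo/images/example.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Single positive point prompt; the SAM-style API also accepts `box=` for Bbox prompts.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[128, 128]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
```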

πŸ—“οΈ Ongoing

🎫 License

This project is released under the Apache 2.0 license.

πŸ’¬ Discussion Group

If you have any questions about SAM-Med2D, please add the WeChat ID below to join the group discussion:

<p align="center"><img width="300" alt="image" src="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/SAM_Med2D_wechat_group.png"></p>

🀝 Acknowledgement

πŸ‘‹ Hiring & Global Collaboration

Reference

@misc{cheng2023sammed2d,
      title={SAM-Med2D}, 
      author={Junlong Cheng and Jin Ye and Zhongying Deng and Jianpin Chen and Tianbin Li and Haoyu Wang and Yanzhou Su and
              Ziyan Huang and Jilong Chen and Lei Jiang and Hui Sun and Junjun He and Shaoting Zhang and Min Zhu and Yu Qiao},
      year={2023},
      eprint={2308.16184},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{ye2023samed2d20m,
      title={SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks}, 
      author={Jin Ye and Junlong Cheng and Jianpin Chen and Zhongying Deng and Tianbin Li and Haoyu Wang and Yanzhou Su and Ziyan Huang and Jilong Chen and Lei Jiang and Hui Sun and Min Zhu and Shaoting Zhang and Junjun He and Yu Qiao},
      year={2023},
      eprint={2311.11969},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}