<p align="center">Video Polyp Segmentation: A Deep Learning Perspective (MIR 2022)</p><!-- omit in toc -->


<p align="center"> <img src="./assets/background-min.gif"/> <br /> </p>

https://github.com/GewelsJI/VPS/assets/38354957/9bea01ae-9582-494f-8bf6-f83307eebc08

Contents<!-- omit in toc -->

1. Features
2. News
3. VPS Dataset
4. VPS Baseline
5. VPS Benchmark
6. Tracking Trends
7. Citations
8. FAQ
9. License
10. Acknowledgments

1. Features

We present the first comprehensive study of video polyp segmentation (VPS) in the deep learning era. Over the years, progress on VPS has been hindered because large-scale, fine-grained segmentation masks have not been publicly available. To tackle this issue, we introduce a long-awaited, high-quality, per-frame annotated VPS dataset. There are four features of our work:

2. News

3. VPS Dataset

<p align="center"> <img src="./assets/Pathological-min.gif"/> <br /> <em> Figure 1: Annotation of SUN-SEG dataset. The object-level segmentation masks in the SUN-SEG dataset of different pathological categories, which is densely annotated with experienced annotators and verified by colonoscopy-related researchers to ensure the quality of the proposed dataset. </em> </p>

Notably, due to privacy-preserving requirements inherited from the SUN dataset, we cannot share the download link of the video dataset without authorization. Please state your institution and the intended purpose of using SUN-SEG in your email. Thank you for your understanding!
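Once you are granted access, a quick sanity check is to overlay a ground-truth mask on its corresponding frame. Below is a minimal Python sketch; the directory layout and file names are hypothetical placeholders, not the official SUN-SEG structure, so adapt them to the data you receive.

```python
# Minimal mask-overlay sanity check (hypothetical paths -- adapt to the
# actual SUN-SEG layout after you receive the dataset).
import numpy as np
from PIL import Image

frame = np.array(Image.open("SUN-SEG/case1/Frame/0001.jpg").convert("RGB"))
mask = np.array(Image.open("SUN-SEG/case1/GT/0001.png").convert("L"))

overlay = frame.copy()
red = np.array([255, 0, 0], dtype=np.float64)
# Tint the annotated polyp region red at 50% opacity.
overlay[mask > 127] = (0.5 * overlay[mask > 127] + 0.5 * red).astype(np.uint8)
Image.fromarray(overlay).save("overlay_0001.png")
```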

4. VPS Baseline

This work is an extended version of our conference paper (Progressively Normalized Self-Attention Network for Video Polyp Segmentation), accepted at MICCAI-2021. For more details, please refer to the arXiv paper and the GitHub link.

<p align="center"> <img src="./assets/PNSPlus-Framework.png"/> <br /> <em> Figure 2: The pipeline of the proposed (a) PNS+ network, which is based on (b) the normalized self-attention (NS) block. </em> </p>

There are three simple steps to access our project code (PNS+):

5. VPS Benchmark

We provide an out-of-the-box evaluation toolbox for the VPS task, written in Python. You can run it directly to generate evaluation results for your own approach, or download the complete VPS benchmark, including the prediction maps of every competitor, from the download link: Google Drive (5.45 GB) / Baidu Drive (Password: 2t1l, Size: 5.45 GB).
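For a quick custom check before running the full toolbox, the sketch below computes frame-level Dice and IoU over prediction/ground-truth mask pairs. The directory layout is a hypothetical placeholder, and the official toolbox reports a richer metric suite (e.g., S-measure and E-measure), so treat this only as a lightweight approximation.

```python
import numpy as np
from PIL import Image
from pathlib import Path

def dice_iou(pred: np.ndarray, gt: np.ndarray, thr: int = 127):
    """Binary Dice and IoU for one frame (8-bit grayscale masks)."""
    p, g = pred > thr, gt > thr
    inter = np.logical_and(p, g).sum()
    dice = 2.0 * inter / max(int(p.sum() + g.sum()), 1)
    iou = inter / max(int(np.logical_or(p, g).sum()), 1)
    return dice, iou

# Hypothetical layout: one prediction per ground-truth mask, same file names.
pred_dir, gt_dir = Path("results/PNS+"), Path("SUN-SEG-Easy/GT")
scores = [dice_iou(np.array(Image.open(pred_dir / f.name).convert("L")),
                   np.array(Image.open(f).convert("L")))
          for f in sorted(gt_dir.glob("*.png"))]
if scores:
    dices, ious = zip(*scores)
    print(f"mean Dice = {np.mean(dices):.4f}, mean IoU = {np.mean(ious):.4f}")
```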

We also built an online leaderboard to keep track of new progress from other competitors. We believe this is a fun way to learn about new research directions and stay in tune with the VPS community.

Here, we present a variety of qualitative and quantitative results on the VPS benchmark:

<p align="center"> <img src="./assets/Qual-min.gif"/> <br /> <em> Figure 3: Qualitative comparison of three video-based models (PNS+, PNSNet, and 2/3D) and two image-based models (ACSNet, and PraNet). </em> </p> <p align="center"> <img src="./assets/ModelPerformance.png"/> <br /> <em> Figure 4: Quantitative comparison on two testing sub-datasets, i.e., SUN-SEG-Easy (Unseen) and SUN-SEG-Hard (Unseen). `R/T' represents we re-train the non-public model, whose code is provided by the original authors. The best scores are highlighted in bold. </em> </p> <p align="center"> <img src="./assets/AttributePerformance.png"/> <br /> <em> Figure 5: Visual attributes-based performance on our SUN-SEG-Easy (Unseen) and SUN-SEG-Hard (Unseen) in terms of structure measure. </em> </p>

6. Tracking Trends

<p align="center"> <img src="./assets/the-reading-list.png"/> <br /> </p>

To help readers better understand the development of this field and to speed up their research, we have elaborately built a Paper Reading List. It covers 119 colonoscopy imaging-based AI studies from the past 12 years, spanning several tasks, such as image polyp segmentation, video polyp segmentation, image polyp detection, video polyp detection, and image polyp classification. We also provide some interesting resources on human colonoscopy.

Note: If we have missed any treasured works, please let us know via e-mail or open a PR directly. We will work on it as soon as possible. Many thanks for your active feedback.

7. Citations

If you have found our work useful, please use the following references to cite this project:

@article{ji2022video,
  title={Video polyp segmentation: A deep learning perspective},
  author={Ji, Ge-Peng and Xiao, Guobao and Chou, Yu-Cheng and Fan, Deng-Ping and Zhao, Kai and Chen, Geng and Van Gool, Luc},
  journal={Machine Intelligence Research},
  volume={19},
  number={6},
  pages={531--549},
  year={2022},
  publisher={Springer}
}


@inproceedings{ji2021progressively,
  title={Progressively normalized self-attention network for video polyp segmentation},
  author={Ji, Ge-Peng and Chou, Yu-Cheng and Fan, Deng-Ping and Chen, Geng and Fu, Huazhu and Jha, Debesh and Shao, Ling},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={142--152},
  year={2021},
  organization={Springer}
}

@inproceedings{fan2020pranet,
  title={Pranet: Parallel reverse attention network for polyp segmentation},
  author={Fan, Deng-Ping and Ji, Ge-Peng and Zhou, Tao and Chen, Geng and Fu, Huazhu and Shen, Jianbing and Shao, Ling},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={263--273},
  year={2020},
  organization={Springer}
}

8. FAQ

9. License

The dataset and source code are free for research and education use only. Any commercial use requires formal permission first.

10. Acknowledgments