PraNet: Parallel Reverse Attention Network for Polyp Segmentation (MICCAI2020-Oral & MICCAI2024 Young Scientist Publication Impact Award Shortlist)
Authors: Deng-Ping Fan, Ge-Peng Ji, Tao Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, and Ling Shao.
- We received the Jittor Developer Conference Distinguished Paper & Most Influential (Application) Paper award. <p align="center"> <img src="imgs/PraNet-Award.png"/> <br /> </p>
- We are on the MICCAI2024 Young Scientist Publication Impact Award Shortlist.
1. Preface
- This repository provides code for "PraNet: Parallel Reverse Attention Network for Polyp Segmentation", MICCAI 2020 (paper | Chinese version).
- If you have any questions about our paper, feel free to contact me. If you use PraNet or our evaluation toolbox in your research, please cite this paper (BibTeX).
1.1. :fire: NEWS :fire:
- [2022/11/26] Our PraNet has been developed on the Huawei Ascend platform; the project can be found on Gitee, with an introduction on CSDN.
- [2022/03/27] :boom: We release a new large-scale dataset for the Video Polyp Segmentation (VPS) task; please enjoy it. Project Link / PDF.
- [2021/12/26] :boom: Our PraNet won the "Most Influential Jittor Paper (Application) Award" at the Jittor Developer Conference 2021.
- [2021/09/07] The Jittor conversion of PraNet (inference code) is available now. It achieves competitive inference efficiency compared to the PyTorch version; please enjoy it. Many thanks to Yu-Cheng Chou for the excellent conversion from the PyTorch framework.
- [2021/09/05] The TensorFlow (Keras) implementation of PraNet (ResNet50/MobileNetV2 versions) is released at github-link. Thanks to Tauhid Khan.
- [2021/08/18] An improved version (PraNet-V2) has been released: https://github.com/DengPingFan/Polyp-PVT.
- [2021/04/23] We update the results of our PraNet, retrained from scratch on the COD dataset, on four Camouflaged Object Detection (COD) testing datasets (i.e., COD10K, NC4K, CAMO, and CHAMELEON). Download links on Google Drive are available here: results, model weights, evaluation results.
- [2021/01/21] :boom: Our PraNet has been used as the base segmentation model in the recent work of Prof. Michael I. Jordan et al. (Distribution-Free, Risk-Controlling Prediction Sets, Journal of the ACM 2021).
- [2021/01/10] :boom: Our PraNet achieved the Top-1 ranking on the camouflaged object detection task (link).
- [2020/09/18] Upload the pre-computed maps.
- [2020/06/24] Release training/testing code.
- [2020/05/28] Upload pre-trained weights.
- [2020/03/24] Create repository.
1.2. Table of Contents
- PraNet: Parallel Reverse Attention Network for Polyp Segmentation (MICCAI2020-Oral)
  - 1. Preface
    - 1.1. NEWS
    - 1.2. Table of Contents
    - 1.3. State-of-the-art Approaches
  - 2. Overview
    - 2.1. Introduction
    - 2.2. Framework Overview
    - 2.3. Qualitative Results
  - 3. Proposed Baseline
    - 3.1. Training/Testing
    - 3.2. Evaluating your trained model
    - 3.3. Pre-computed maps
  - 4. Citation
  - 5. TODO LIST
  - 6. FAQ
  - 7. License
<small><i><a href='http://ecotrust-canada.github.io/markdown-toc/'>Table of contents generated with markdown-toc</a></i></small>
1.3. State-of-the-art Approaches
- "Selective feature aggregation network with area-boundary constraints for polyp segmentation." IEEE Transactions on Medical Imaging, 2019. paper link: https://link.springer.com/chapter/10.1007/978-3-030-32239-7_34
- "PraNet: Parallel Reverse Attention Network for Polyp Segmentation" IEEE Transactions on Medical Imaging, 2020. paper link: https://link.springer.com/chapter/10.1007%2F978-3-030-59725-2_26
- "Hardnet-mseg: A simple encoder-decoder polyp segmentation neural network that achieves over 0.9 mean dice and 86 fps" arXiv, 2021 paper link: https://arxiv.org/pdf/2101.07172.pdf
- "TransFuse: Fusing Transformers and CNNs for Medical Image Segmentation" arXiv, 2021. paper link: https://arxiv.org/pdf/2102.08005.pdf
- "Automatic Polyp Segmentation via Multi-scale Subtraction Network" MICCAI, 2021. paper link: https://arxiv.org/pdf/2108.05082.pdf
- "CCBANet: Cascading Context and Balancing Attention for Polyp Segmentation" MICCAI, 2021. paper link: https://link.springer.com/book/10.1007/978-3-030-87193-2?noAccess=true
- "Double Encoder-Decoder Networks for Gastrointestinal Polyp Segmentation" MICCAI, 2021. paper link: https://arxiv.org/pdf/2110.01939.pdf
- "HRENet: A Hard Region Enhancement Network for Polyp Segmentation" MICCAI, 2021. paper link: https://link.springer.com/book/10.1007/978-3-030-87193-2?noAccess=true
- "Learnable Oriented-Derivative Network for Polyp Segmentation" MICCAI, 2021. paper link: https://link.springer.com/book/10.1007/978-3-030-87193-2?noAccess=true
- "Shallow attention network for polyp segmentation" MICCAI, 2021. paper link: https://arxiv.org/pdf/2108.00882.pdf
For the latest trends in image-/video-based polyp segmentation, please refer to AWESOME_VPS.md.
2. Overview
2.1. Introduction
Colonoscopy is an effective technique for detecting colorectal polyps, which are highly related to colorectal cancer. In clinical practice, segmenting polyps from colonoscopy images is of great importance, since it provides valuable information for diagnosis and surgery. However, accurate polyp segmentation is a challenging task for two major reasons: (i) polyps of the same type vary in size, color, and texture; and (ii) the boundary between a polyp and its surrounding mucosa is not sharp.
To address these challenges, we propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images. Specifically, we first aggregate the features in high-level layers using a parallel partial decoder (PPD). Based on the combined feature, we then generate a global map as the initial guidance area for the subsequent components. In addition, we mine boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues. Thanks to the recurrent cooperation mechanism between areas and boundaries, our PraNet is capable of correcting misaligned predictions, thereby improving segmentation accuracy.
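To make the reverse attention idea concrete, below is a minimal PyTorch sketch of an RA-style block. It is illustrative only: the class name, channel widths, and refinement head are assumptions, not the exact module in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionSketch(nn.Module):
    """Illustrative RA block: erase the already-predicted region so the
    branch focuses on the residual regions and boundary cues."""
    def __init__(self, in_channels):
        super().__init__()
        # small refinement head (depth and width are illustrative)
        self.refine = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, feat, coarse_map):
        # upsample the 1-channel coarse prediction (logits) to the feature size
        guide = F.interpolate(coarse_map, size=feat.shape[2:],
                              mode='bilinear', align_corners=False)
        # reverse attention weight: emphasize what is NOT yet segmented
        weight = 1.0 - torch.sigmoid(guide)
        # refine the erased feature and add the residual back to the guidance
        return self.refine(feat * weight) + guide
```

In PraNet this erase-and-refine step is applied at three feature levels, each level refining the prediction produced by the deeper one (see Figure 1).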
Quantitative and qualitative evaluations on five challenging datasets across six metrics show that our PraNet improves segmentation accuracy significantly and offers advantages in generalizability and real-time segmentation efficiency (∼50 fps).
2.2. Framework Overview
<p align="center"> <img src="imgs/framework-final-min.png"/> <br /> <em> Figure 1: Overview of the proposed PraNet, which consists of three reverse attention modules with a parallel partial decoder connection. See § 2 in the paper for details. </em> </p>2.3. Qualitative Results
<p align="center"> <img src="imgs/qualitative_results.png"/> <br /> <em> Figure 2: Qualitative Results. </em> </p>3. Proposed Baseline
3.1. Training/Testing
The training and testing experiments are conducted using PyTorch on a single GeForce RTX TITAN GPU with 24 GB of memory.
Note that our model also runs on GPUs with less memory; simply lower the batch size.
- Configuring your environment (Prerequisites):
  Note that PraNet has only been tested on Ubuntu with the following environment. It may work on other operating systems as well, but we do not guarantee that it will.
  - Creating a virtual environment in terminal: `conda create -n PraNet python=3.6`.
  - Installing necessary packages: PyTorch 1.1 (an example install command is shown below).
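  For example, the environment could be set up as follows; the CUDA toolkit version is an assumption, so adjust it to your driver:

  ```bash
  conda create -n PraNet python=3.6
  conda activate PraNet
  # PyTorch 1.1 with a matching torchvision (CUDA 10.0 is an assumed version)
  conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=10.0 -c pytorch
  ```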
- Downloading necessary data (the expected directory layout is sketched after this list):
  - Downloading the testing dataset and moving it into `./data/TestDataset/`, which can be found in this Google Drive Link (327.2MB). It contains five sub-datasets: CVC-300 (60 test samples), CVC-ClinicDB (62 test samples), CVC-ColonDB (380 test samples), ETIS-LaribPolypDB (196 test samples), and Kvasir (100 test samples).
  - Downloading the training dataset and moving it into `./data/TrainDataset/`, which can be found in this Google Drive Link (399.5MB). It contains two sub-datasets: Kvasir-SEG (900 train samples) and CVC-ClinicDB (550 train samples).
  - Downloading the pretrained weights and moving them into `snapshots/PraNet_Res2Net/PraNet-19.pth`, which can be found in this Google Drive Link (124.6MB).
  - Downloading the Res2Net weights: Google Drive (98.4MB).
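After these downloads, the working tree should look roughly as follows (assembled from the paths above):

```
PraNet/
├── data/
│   ├── TrainDataset/          # Kvasir-SEG (900) + CVC-ClinicDB (550)
│   └── TestDataset/
│       ├── CVC-300/
│       ├── CVC-ClinicDB/
│       ├── CVC-ColonDB/
│       ├── ETIS-LaribPolypDB/
│       └── Kvasir/
└── snapshots/
    └── PraNet_Res2Net/
        └── PraNet-19.pth      # pre-trained PraNet weights
```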
- Training Configuration:
  - Assigning your customized paths, e.g., `--train_save` and `--train_path` in `MyTrain.py`.
  - Just enjoy it! (A sample training command is shown below.)
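A sample invocation might look like the following; the flag names come from `MyTrain.py`, while the concrete values are only illustrative:

```bash
python MyTrain.py --train_path ./data/TrainDataset --train_save PraNet_Res2Net
```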
- Testing Configuration:
  - After downloading the pre-trained model and testing datasets, just run `MyTest.py` to generate the final prediction maps; point `--pth_path` at your trained model directory.
  - Just enjoy it! (A sample testing command is shown below.)
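A sample invocation, assuming the pre-trained weights downloaded in the step above:

```bash
python MyTest.py --pth_path snapshots/PraNet_Res2Net/PraNet-19.pth
```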
3.2. Evaluating your trained model
Matlab: One-key evaluation is written in MATLAB code (Google Drive Link); please follow the instructions in `./eval/main.m` and just run it to generate the evaluation results in `./res/`.
The complete evaluation toolbox (including data, map, eval code, and res): Google Drive Link (380.6MB).
Python: Please refer to the ACM MM 2021 work UACANet: https://github.com/plemeri/UACANet.
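For a quick sanity check outside the official toolbox, a Dice score can be computed along these lines (a minimal NumPy sketch assuming binarized maps, not the official evaluation code):

```python
import numpy as np

def dice_score(pred, gt, thr=0.5, eps=1e-8):
    """Dice coefficient between a prediction map and a ground-truth mask."""
    pred = np.asarray(pred, dtype=np.float64) >= thr
    gt = np.asarray(gt, dtype=np.float64) >= thr
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```

Averaging this score over all images in a test set gives a mean Dice of the kind commonly reported on polyp segmentation benchmarks.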
3.3. Pre-computed maps
They can be found in Google Drive Link (61.6MB).
4. Citation
Please cite our paper if you find the work useful:
```
@inproceedings{fan2020pranet,
  title={PraNet: Parallel Reverse Attention Network for Polyp Segmentation},
  author={Fan, Deng-Ping and Ji, Ge-Peng and Zhou, Tao and Chen, Geng and Fu, Huazhu and Shen, Jianbing and Shao, Ling},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={263--273},
  year={2020},
  organization={Springer}
}
```
5. TODO LIST
If you want to improve the usability or have any piece of advice, please feel free to contact me directly (E-mail).
- Support NVIDIA APEX training.
- Support different backbones (VGGNet, ResNet, ResNeXt, iResNet, ResNeSt, etc.).
- Support distributed training.
- Support lightweight architectures and real-time inference, like MobileNet and SqueezeNet.
- Add more comprehensive competitors.
6. FAQ
- If the images cannot be loaded on the page (mostly due to domestic network restrictions).
7. License
The source code is free for research and education use only. Any commercial use requires formal permission first.