Accurate medical image segmentation is essential for effective diagnosis and treatment. We previously proposed PraNet-V1 to enhance polyp segmentation, introducing a reverse attention (RA) module that exploits background information. However, PraNet-V1 struggles with multi-class segmentation tasks. To address this limitation, we propose PraNet-V2, which effectively handles a broader range of tasks, including multi-class segmentation. At the core of PraNet-V2 is our dual-supervised reverse attention (DSRA) module, which incorporates explicit background supervision, independent background modeling, and semantically enriched attention fusion. Our PraNet-V2 framework exhibits strong performance on four polyp segmentation datasets. Moreover, integrating DSRA into three state-of-the-art semantic segmentation models enables iterative refinement of foreground segmentation, yielding improvements of up to 1.36% in mean Dice score. Jittor code and supplementary materials are available at https://github.com/ai4colonoscopy/PraNet-V2/tree/main/binary_seg/jittor.
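The core idea of reverse attention, on which DSRA builds, can be sketched in a few lines: a coarse foreground prediction is inverted into a background-probability map that re-weights the features, so the next refinement stage focuses on regions the coarse prediction missed. The sketch below is a minimal NumPy illustration under that reading of the abstract; the function names and toy shapes are illustrative and do not reproduce the authors' Jittor implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features, coarse_logits):
    """Weight features by the background probability (1 - sigmoid),
    steering refinement toward regions the coarse map marks as background."""
    fg_prob = sigmoid(coarse_logits)   # foreground probability in [0, 1]
    ra_weight = 1.0 - fg_prob          # reverse (background) attention map
    return features * ra_weight        # broadcasts over the channel axis

# Toy example: one channel, 2x2 spatial map.
features = np.ones((1, 2, 2))
coarse = np.array([[[10.0, -10.0],    # confident foreground / background
                    [0.0, 0.0]]])     # uncertain pixels
refined = reverse_attention(features, coarse)
# Confident-foreground pixels are suppressed (~0), confident-background
# pixels pass through (~1), and uncertain pixels are weighted 0.5.
```

DSRA extends this by adding explicit supervision on the background branch, rather than treating `1 - fg_prob` as a purely implicit by-product of the foreground prediction.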

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.