Clinically, accurate polyp localization in endoscopy images plays a vital role in follow-up treatment (e.g., surgical planning). Deep convolutional neural networks (CNNs) offer a promising avenue for automatic polyp segmentation and avoid the limitations of visual inspection, e.g., subjectivity and fatigue. However, most existing CNN-based methods yield unsatisfactory segmentation performance. In this paper, we propose a novel boundary constraint network, namely BCNet, for accurate polyp segmentation. The success of BCNet stems from integrating cross-level context information and leveraging edge information. Specifically, to avoid the drawbacks of simple feature addition or concatenation, BCNet applies a cross-layer feature integration strategy (CFIS) to fuse the features of the three highest layers, yielding better performance. CFIS consists of three attention-driven cross-layer feature interaction modules (ACFIMs) and two global feature integration modules (GFIMs). Each ACFIM adaptively fuses the context information of the three highest layers via a self-attention mechanism instead of direct addition or concatenation. Each GFIM integrates the fused information across layers under the guidance of global attention. To obtain accurate boundaries, BCNet introduces a bilateral boundary extraction module that collaboratively mines polyp and non-polyp information from the shallow layer, guided by high-level location information and boundary supervision. Through joint supervision of the polyp area and boundary, BCNet is able to produce more accurate polyp masks. Experimental results on three public datasets show that the proposed BCNet outperforms seven state-of-the-art competing methods in terms of both effectiveness and generalization.
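The abstract does not spell out the internals of the attention-driven fusion, but its key idea, replacing direct addition or concatenation with attention-weighted mixing of cross-layer features, can be sketched. Below is a minimal PyTorch sketch of such a fusion gate; the class name `AttentionFusion` and all architectural choices (1x1 projections, sigmoid gating) are illustrative assumptions, not the authors' actual ACFIM design.

```python
# Hypothetical sketch of attention-driven cross-layer fusion. This is NOT
# the paper's exact ACFIM; the abstract only states that self-attention
# replaces direct addition or concatenation of cross-layer features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Fuse feature maps from two adjacent layers with an attention gate."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs project both inputs to a shared channel width
        self.proj_high = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_low = nn.Conv2d(channels, channels, kernel_size=1)
        # attention weights computed from the concatenated projections
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_high: torch.Tensor, f_low: torch.Tensor) -> torch.Tensor:
        # upsample the deeper (smaller) feature map to the shallower size
        f_high = F.interpolate(f_high, size=f_low.shape[2:],
                               mode="bilinear", align_corners=False)
        h, l = self.proj_high(f_high), self.proj_low(f_low)
        # the attention map decides, per position and channel, how to
        # mix the two layers instead of simply adding them
        a = self.attn(torch.cat([h, l], dim=1))
        return a * h + (1 - a) * l

# usage: fuse a 1/16-scale map into a 1/8-scale map, both 256 channels
fuse = AttentionFusion(256)
out = fuse(torch.randn(1, 256, 16, 16), torch.randn(1, 256, 32, 32))
print(out.shape)  # torch.Size([1, 256, 32, 32])
```

The learned gate lets the network decide, location by location, whether high-level context or shallow detail should dominate, which is the stated motivation for avoiding plain addition or concatenation.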
DOI: http://dx.doi.org/10.1109/JBHI.2022.3173948