Automatic retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. Existing deep learning models for retinal vessel segmentation typically treat all pixels equally. However, the multi-scale structure of vessels strongly affects segmentation quality, especially for thin vessels. To address this gap, we propose a Fully Attention-based Network (FANet) that uses attention mechanisms to adaptively learn rich feature representations and aggregate multi-scale information. Specifically, the framework consists of an image pre-processing stage and a semantic segmentation network. Green channel extraction (GE) and contrast limited adaptive histogram equalization (CLAHE) are employed as pre-processing to enhance the texture and contrast of retinal fundus images. The segmentation network combines two types of attention modules with a U-Net backbone. We propose a lightweight dual-direction attention block to model global dependencies and reduce intra-class inconsistency, in which the weights of feature maps are updated according to the semantic correlation between pixels. The dual-direction attention block uses horizontal and vertical pooling operations to produce the attention map; in this way, the network aggregates global contextual information from semantically related regions, i.e., series of pixels belonging to the same object category. Meanwhile, we adopt the selective kernel (SK) unit in place of standard convolutions to obtain multi-scale features over receptive fields of different sizes, fused by soft attention. Furthermore, we demonstrate that the proposed model can effectively identify irregular, noisy, and multi-scale retinal vessels. Extensive experiments on the DRIVE, STARE, and CHASE_DB1 datasets show that our method achieves state-of-the-art performance.
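The three components described above can be illustrated with short sketches. First, the pre-processing: the abstract names green channel extraction and CLAHE but states no parameters, so the clip limit and tile grid below are common defaults rather than the paper's settings, and `preprocess` is a hypothetical name.

```python
# Minimal sketch of the described pre-processing: green channel
# extraction followed by CLAHE. Parameter values are assumptions.
import cv2
import numpy as np

def preprocess(fundus_bgr: np.ndarray) -> np.ndarray:
    """Return the CLAHE-enhanced green channel of a BGR fundus image."""
    green = fundus_bgr[:, :, 1]  # OpenCV loads color images as BGR; index 1 is green
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed settings
    return clahe.apply(green)
```

Second, the dual-direction attention block. The abstract only specifies that horizontal and vertical pooling produce an attention map that reweights features by semantic correlation; the concrete layer layout below (1x1 convolutions, sigmoid gating, PyTorch) is one plausible reading, not the paper's implementation.

```python
# Sketch of a dual-direction attention block as the abstract describes it:
# horizontal and vertical average pooling produce an attention map that
# reweights the input features. Layer choices here are assumptions.
import torch
import torch.nn as nn

class DualDirectionAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        pooled_h = x.mean(dim=3, keepdim=True)  # horizontal pooling -> (B, C, H, 1)
        pooled_w = x.mean(dim=2, keepdim=True)  # vertical pooling   -> (B, C, 1, W)
        # Broadcasting the two pooled maps yields a full (B, C, H, W) attention
        # map, so each pixel is weighted by context along its row and column.
        attn = torch.sigmoid(self.conv_h(pooled_h) + self.conv_w(pooled_w))
        return x * attn
```

Third, the selective kernel unit, which follows the published SK design (Li et al., "Selective Kernel Networks", CVPR 2019): branches with different receptive fields are fused by channel-wise soft attention. The branch kernels and reduction ratio below are illustrative.

```python
# Compact sketch of an SK unit: two branches with different receptive
# fields, fused by soft attention over the branch dimension.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SKUnit(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        # Dilated 3x3 gives a 5x5 receptive field at lower cost.
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.attn = nn.Linear(hidden, channels * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))            # fuse: global average pooling -> (B, C)
        w = self.attn(self.fc(s)).view(x.size(0), 2, -1)
        w = F.softmax(w, dim=1)                   # soft attention across the two branches
        w3 = w[:, 0].unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        w5 = w[:, 1].unsqueeze(-1).unsqueeze(-1)
        return u3 * w3 + u5 * w5
```

Per the abstract, such units would stand in for the standard convolutions inside the U-Net blocks, so each stage can adapt its effective receptive field to vessels of different calibers.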
DOI: http://dx.doi.org/10.1109/JBHI.2020.3028180