Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference in semantic information between deep and shallow features and thus fail to capture global and local characterizations of fundus images simultaneously, resulting in limited segmentation performance for fine vessels. In this article, a global transformer (GT) and dual local attention (DLA) network via deep-shallow hierarchical feature fusion (GT-DLA-dsHFF) is investigated to address these limitations. First, the GT is developed to integrate global information in the retinal image; it effectively captures long-distance dependencies between pixels, alleviating discontinuities of blood vessels in the segmentation results. Second, the DLA module, constructed from dilated convolutions with varied dilation rates, unsupervised edge detection, and a squeeze-excitation block, is proposed to extract local vessel information and consolidate edge details in the segmentation results. Finally, a novel deep-shallow hierarchical feature fusion (dsHFF) algorithm is studied to fuse features at different scales within the deep learning framework, mitigating the attenuation of valid information during feature fusion. We evaluated GT-DLA-dsHFF on four typical fundus image datasets. The experimental results demonstrate that GT-DLA-dsHFF achieves superior performance compared with current methods, and detailed discussions verify the efficacy of the three proposed modules. Segmentation results on diseased images show the robustness of the proposed GT-DLA-dsHFF. Implementation code will be available at https://github.com/YangLibuaa/GT-DLA-dsHFF.
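
To illustrate the flavor of the local branch described in the abstract (dilated convolutions with varied dilation rates re-weighted by a squeeze-excitation gate), the following is a minimal PyTorch sketch. It is not the authors' implementation (see the linked repository for that), it omits the unsupervised edge-detection branch, and all module names, channel sizes, and dilation rates are illustrative assumptions.

```python
# Hypothetical sketch of a dual-local-attention-style block: parallel dilated
# convolutions (local context at several receptive fields) fused and re-weighted
# by a squeeze-and-excitation gate. Names and hyperparameters are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Channel re-weighting via global average pooling and two 1x1 convolutions."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)


class DilatedLocalBlock(nn.Module):
    """Parallel dilated convolutions, concatenated, fused, and gated by SE."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)
        self.se = SqueezeExcitation(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.se(self.fuse(local))


if __name__ == "__main__":
    # Toy single-channel fundus patch, e.g. the green channel of a 64x64 crop.
    block = DilatedLocalBlock(in_ch=1, out_ch=32)
    print(block(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```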

Source
http://dx.doi.org/10.1109/TCYB.2022.3194099

Publication Analysis

Top Keywords

feature fusion (16); deep-shallow hierarchical (12); hierarchical feature (12); global transformer (8); transformer dual (8); dual local (8); local attention (8); network deep-shallow (8); retinal vessel (8); vessel segmentation (8)

Similar Publications

Epithelial cell adhesion molecule (EpCAM) fused to IgG, IgA and IgM Fc domains was expressed to create IgG, IgA and IgM-like structures as anti-cancer vaccines in Nicotiana tabacum. High-mannose glycan structures were generated by adding a C-terminal endoplasmic reticulum (ER) retention motif (KDEL) to the Fc domain (FcK) to produce EpCAM-Fc and EpCAM-FcK proteins in transgenic plants via Agrobacterium-mediated transformation. Cross-fertilization of EpCAM-Fc (FcK) transgenic plants with Joining chain (J-chain, J and JK) transgenic plants led to stable expression of large quaternary EpCAM-IgA Fc (EpCAM-A) and IgM-like (EpCAM-M) proteins.

Introduction: Functional magnetic resonance imaging (fMRI) data is highly complex and high-dimensional, capturing signals from regions of interest (ROIs) with intricate correlations. Analyzing such data is particularly challenging, especially in resting-state fMRI, where patterns are less identifiable without task-specific contexts. Nonetheless, interconnections among ROIs provide essential insights into brain activity and exhibit unique characteristics across groups.

Freshness in Salmon by Hand-Held Devices: Methods in Feature Selection and Data Fusion for Spectroscopy.

ACS Food Sci Technol

December 2024

National Measurement Laboratory: Centre of Excellence in Agriculture and Food Integrity, Institute for Global Food Security, School of Biological Sciences, Queen's University Belfast, Belfast BT9 5DL, U.K.

Salmon fillet was analyzed via hand-held optical devices: fluorescence (excitation at 340 nm) and absorption spectroscopy across the visible and near-infrared (NIR) range (400-1900 nm). Spectroscopic measurements were benchmarked with nucleotide assays and potentiometry in an exploratory set of experiments over 11 days, with changes to spectral profiles noted. A second, enlarged spectroscopic data set, over a 17 day period, was then acquired, and fillet freshness was classified ±1 day via four machine learning (ML) algorithms: linear discriminant analysis, Gaussian naïve Bayes, weighted k-nearest neighbors, and an ensemble bagged tree method.

Background: Radiomic features and deep features are both vitally helpful for the accurate prediction of tumor information in breast ultrasound. However, whether integrating radiomic features and deep features can improve the prediction performance of tumor information is unclear.

Methods: A feature fusion method based on radiomic features and revised deep features was proposed to predict tumor information.

Correctly diagnosing Alzheimer's disease (AD) and identifying pathogenic brain regions and genes play a vital role in understanding AD and developing effective prevention and treatment strategies. Recent works combine imaging and genetic data, and leverage the strengths of both modalities to achieve better classification results. In this work, we propose MCA-GCN, a Multi-stream Cross-Attention and Graph Convolutional Network-based classification method for AD patients.
