AI Article Synopsis

  • The study introduces RFS-Net, a deep learning model designed to accurately segment and identify multi-class retinal fluids (MRF) like intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED) from optical coherence tomography (OCT) images, enhancing anti-VEGF therapy precision.
  • RFS-Net uses advanced architecture features like atrous spatial pyramid pooling and residual blocks, trained on a dataset of OCT scans from diverse sources, to improve segmentation accuracy while maintaining global information.
  • The model showed promising performance with mean F scores of around 0.76 to 0.81 for MRF segmentation, significantly increasing efficiency compared to traditional manual methods.

Article Abstract

Background: In anti-vascular endothelial growth factor (anti-VEGF) therapy, an accurate estimation of multi-class retinal fluid (MRF) is required for activity prescription and intravitreal dosing. This study proposes an end-to-end deep learning-based retinal fluids segmentation network (RFS-Net) to segment and recognize three MRF lesion manifestations, namely, intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), from multi-vendor optical coherence tomography (OCT) imagery. The proposed image analysis tool will optimize anti-VEGF therapy and contribute to reducing inter- and intra-observer variability.

Method: The proposed RFS-Net architecture integrates atrous spatial pyramid pooling (ASPP), residual, and inception modules in the encoder path to learn richer features and conserve more global information for precise segmentation and characterization of MRF lesions. The RFS-Net model is trained and validated on OCT scans from multiple vendors (Topcon, Cirrus, Spectralis), collected from three publicly available datasets. The first dataset, consisting of OCT volumes from 112 subjects (a total of 11,334 B-scans), is used for both training and evaluation. The remaining two datasets are used only for evaluation, to check the trained RFS-Net's generalizability to unseen OCT scans; together they contain 1572 OCT B-scans from 1255 subjects. The performance of the proposed RFS-Net model is assessed with several evaluation metrics.
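
The abstract does not state the framework or the exact layer configuration of RFS-Net; the following is a minimal, illustrative sketch of an atrous spatial pyramid pooling (ASPP) block in PyTorch, with dilation rates and channel widths chosen purely as assumptions.

import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel atrous convolutions at several dilation rates, fused by a 1x1 convolution."""

    def __init__(self, in_channels, out_channels, rates=(1, 6, 12, 18)):
        super().__init__()
        # One branch per dilation rate; larger rates capture wider spatial context.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the concatenated multi-scale features back to out_channels.
        self.project = nn.Sequential(
            nn.Conv2d(out_channels * len(rates), out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

# Example: a hypothetical encoder feature map derived from an OCT B-scan.
x = torch.randn(1, 64, 64, 128)
print(ASPP(64, 64)(x).shape)  # torch.Size([1, 64, 64, 128])

In an encoder path such as the one described above, a block like this is typically placed after the residual/inception stages so that multi-scale context is aggregated before decoding; the actual placement in RFS-Net is not detailed in the abstract.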

Results: The proposed RFS-Net model achieved mean F scores of 0.762, 0.796, and 0.805 for segmenting IRF, SRF, and PED, respectively. By automating the segmentation of these three retinal manifestations, RFS-Net brings a considerable efficiency gain over the tedious and demanding manual segmentation of MRF.
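
The abstract does not specify how the F score is computed; a common choice for segmentation masks is the per-class F1 (Dice) overlap, sketched below in NumPy as an assumption for illustration.

import numpy as np

def f_score(pred_mask, true_mask, eps=1e-7):
    """F1 (Dice) overlap between binary masks of one fluid class (e.g., IRF)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()   # correctly segmented fluid pixels
    fp = np.logical_and(pred, ~true).sum()  # spurious fluid pixels
    fn = np.logical_and(~pred, true).sum()  # missed fluid pixels
    return (2 * tp) / (2 * tp + fp + fn + eps)

# Toy example with two 4x4 binary masks.
pred = np.array([[0, 1, 1, 0]] * 4)
true = np.array([[0, 1, 0, 0]] * 4)
print(round(f_score(pred, true), 3))  # 0.667

The reported per-class means (0.762 for IRF, 0.796 for SRF, 0.805 for PED) would then be averages of such scores over the evaluation B-scans.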

Conclusions: Our proposed RFS-Net is a potential diagnostic tool for the automatic segmentation of MRF (IRF, SRF, and PED) lesions. It is expected to strengthen inter-observer agreement and, as a result, to help standardize dosimetry.

Source
http://dx.doi.org/10.1016/j.compbiomed.2021.104727

Publication Analysis

Top Keywords (frequency)

  • proposed rfs-net (16)
  • oct scans (12)
  • anti-vegf therapy (12)
  • rfs-net model (12)
  • segmentation characterization (8)
  • multi-class retinal (8)
  • retinal fluid (8)
  • evaluation purposes (8)
  • irf srf (8)
  • srf ped (8)
