Fast and reliable quantification of cone photoreceptors is a bottleneck in the clinical utilization of adaptive optics scanning light ophthalmoscope (AOSLO) systems for the study, diagnosis, and prognosis of retinal diseases. To date, manual grading has been the sole reliable source of AOSLO quantification, as no automatic method has been reliably applied to cone detection in real-world, low-quality images of diseased retina. We present a novel deep learning-based approach that combines information from both the confocal and non-confocal split detector AOSLO modalities to detect cones in subjects with achromatopsia. Our dual-mode deep learning-based approach outperforms the state-of-the-art automated techniques and is on a par with human grading.
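The abstract does not describe the network itself, but a dual-mode detector of this kind is typically built by feeding co-registered confocal and split detector patches through either a shared encoder (modalities stacked as channels) or two encoder branches whose features are fused before classification. The following is a minimal, hypothetical PyTorch sketch of the two-branch variant; the class name DualModeConeDetector, the patch size, the channel widths, and the cone/non-cone patch-classification framing are assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of a dual-mode (confocal + split detector) cone
# detector; all architecture details are assumptions, not the published model.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions followed by 2x2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class DualModeConeDetector(nn.Module):
    """Two encoder branches (one per AOSLO modality) fused before a
    classification head that predicts cone vs. non-cone for a patch."""

    def __init__(self):
        super().__init__()
        self.confocal_branch = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.split_branch = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 2),  # cone / non-cone
        )

    def forward(self, confocal, split_detector):
        fused = torch.cat(
            [self.confocal_branch(confocal), self.split_branch(split_detector)],
            dim=1,
        )
        return self.head(fused)


# Example: a batch of 8 co-registered 32x32 patches from each modality.
model = DualModeConeDetector()
logits = model(torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32))
print(logits.shape)  # torch.Size([8, 2])
```

In practice, a patch classifier of this kind would be slid over the co-registered image pair (or replaced by a fully convolutional variant), with local maxima of the resulting cone-probability map taken as detected cone positions.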

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6191607
DOI: http://dx.doi.org/10.1364/BOE.9.003740

Publication Analysis

Top Keywords

deep learning (12); learning based (12); cone photoreceptors (8); adaptive optics (8); optics scanning (8); scanning light (8); light ophthalmoscope (8); based approach (8); based detection (4); detection cone (4)

Similar Publications

Generative Adversarial Networks for Neuroimage Translation.

J Comput Biol

December 2024

Electrical, Computer and Biomedical Engineering, Toronto Metropolitan University, Toronto, Canada.

Image-to-image translation has gained popularity in the medical field as a way to transform images from one domain to another. Medical image synthesis via domain transformation is advantageous because it can augment an image dataset in which images for a given class are limited. From a learning perspective, this process improves the data-oriented robustness of the model by broadening its exposure to more diverse visual data and enabling it to learn more generalized features.
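As a rough illustration of how such translation models are trained, a conditional GAN pairs a generator that maps a source-domain image to the target domain with a discriminator that judges whether a target-domain image is real or synthesized; the generator objective usually combines an adversarial term with a pixel-wise reconstruction term. The snippet below is a hedged, pix2pix-style sketch with toy networks and data; the loss weighting (lambda_l1), the tiny convolutional stacks, and the conditional-discriminator design are illustrative assumptions, not the cited paper's method.

```python
# Hypothetical pix2pix-style training losses for image-to-image translation;
# the architectures and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()  # adversarial criterion
rec_loss = nn.L1Loss()             # pixel-wise reconstruction criterion
lambda_l1 = 100.0                  # assumed reconstruction weight

# Toy stand-ins for a real generator and a conditional discriminator.
generator = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1)
)
discriminator = nn.Sequential(
    nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1)
)


def generator_step(source, target):
    """Generator objective: fool the discriminator and stay close to target."""
    fake = generator(source)
    pred_fake = discriminator(torch.cat([source, fake], dim=1))
    loss_adv = adv_loss(pred_fake, torch.ones_like(pred_fake))
    return loss_adv + lambda_l1 * rec_loss(fake, target)


def discriminator_step(source, target):
    """Discriminator objective: real pairs vs. generated pairs."""
    with torch.no_grad():
        fake = generator(source)
    pred_real = discriminator(torch.cat([source, target], dim=1))
    pred_fake = discriminator(torch.cat([source, fake], dim=1))
    return 0.5 * (
        adv_loss(pred_real, torch.ones_like(pred_real))
        + adv_loss(pred_fake, torch.zeros_like(pred_fake))
    )


# Example with random "source-domain" and "target-domain" images.
source = torch.randn(4, 1, 64, 64)
target = torch.randn(4, 1, 64, 64)
print(generator_step(source, target).item())
print(discriminator_step(source, target).item())
```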


Objective: The aim of this study is to determine the contact relationship and position of impacted mandibular third molar teeth (IMM) relative to the mandibular canal (MC) in panoramic radiography (PR) images using deep learning (DL) models trained with the help of cone beam computed tomography (CBCT), and to compare the performances of the DL architectures.

Methods: In this study, a total of 546 IMMs from 290 patients with CBCT and PR images were included. The performances of SqueezeNet, GoogLeNet, and Inception-v3 architectures in solving four problems on two different regions of interest (RoI) were evaluated.
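Training details are not reproduced here, but comparing several ImageNet-pretrained classifiers on cropped regions of interest is commonly set up by replacing each backbone's final layer with one sized for the task's classes and then fine-tuning. The snippet below is a hedged torchvision sketch of that setup; the two-class output and the exact model variants (squeezenet1_1, googlenet, inception_v3) are assumptions rather than the study's reported configuration.

```python
# Hypothetical fine-tuning setup comparing pretrained backbones on RoI crops;
# the binary output and the chosen model variants are assumptions.
import torch.nn as nn
from torchvision import models

num_classes = 2  # e.g., contact vs. no contact with the mandibular canal (assumed)


def build_backbone(name):
    """Return an ImageNet-pretrained classifier with its final layer replaced."""
    if name == "squeezenet":
        net = models.squeezenet1_1(weights="DEFAULT")
        net.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    elif name == "googlenet":
        net = models.googlenet(weights="DEFAULT")
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif name == "inception_v3":
        net = models.inception_v3(weights="DEFAULT", aux_logits=True)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return net


backbones = {
    name: build_backbone(name)
    for name in ("squeezenet", "googlenet", "inception_v3")
}
```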


Background/objectives: Calculating the radiation dose from CT in ¹⁸F-PET/CT examinations poses a significant challenge. The objective of this study is to develop a deep learning-based automated program that standardizes the measurement of radiation doses.

Methods: The torso CT was segmented into six distinct regions using TotalSegmentator.
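One common way to turn such a segmentation into region-wise dose figures is to overlay the integer label map on a voxel-wise dose volume and average within each label. The snippet below is a generic NumPy sketch of that post-processing step on toy data; the label IDs, the six region names, and the dose array are assumptions, not the study's actual pipeline or TotalSegmentator's label scheme.

```python
# Hypothetical region-wise dose summary from a segmentation label map;
# label IDs, region names, and the dose volume are illustrative assumptions.
import numpy as np

# Assumed inputs: a voxel-wise dose volume and an integer label map of the
# same shape, where 0 is background and 1..6 index six body regions.
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 20.0, size=(64, 128, 128))    # mGy per voxel (toy data)
labels = rng.integers(0, 7, size=(64, 128, 128))      # 0 = background

region_names = {1: "head", 2: "neck", 3: "chest", 4: "abdomen", 5: "pelvis", 6: "legs"}

for region_id, name in region_names.items():
    mask = labels == region_id
    mean_dose = dose[mask].mean() if mask.any() else float("nan")
    print(f"{name}: mean dose {mean_dose:.2f} mGy over {mask.sum()} voxels")
```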


Objectives: We evaluated the noise reduction effects of deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) in brain computed tomography (CT).

Methods: CT images of a 16 cm dosimetry phantom, a head phantom, and the brains of 11 patients were reconstructed using filtered backprojection (FBP) and various levels of DLR and HIR. The slice thickness was 5, 2.
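Noise in comparisons like this is commonly quantified as the standard deviation of CT numbers within a uniform region of interest, measured on the same slice for each reconstruction. The snippet below is a hedged illustration on synthetic data; the ROI placement, image sizes, and noise levels are assumptions, not the study's measurements.

```python
# Hypothetical image-noise comparison: SD of HU values in a uniform ROI,
# computed for each reconstruction of the same phantom slice (toy data).
import numpy as np

rng = np.random.default_rng(1)
slices = {
    "FBP": rng.normal(40.0, 12.0, size=(512, 512)),  # noisiest
    "HIR": rng.normal(40.0, 8.0, size=(512, 512)),
    "DLR": rng.normal(40.0, 5.0, size=(512, 512)),   # strongest noise reduction
}

# Assumed circular ROI at the image centre.
yy, xx = np.mgrid[:512, :512]
roi = (yy - 256) ** 2 + (xx - 256) ** 2 <= 50 ** 2

for name, image in slices.items():
    print(f"{name}: noise SD = {image[roi].std():.1f} HU")
```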


Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods such as mammography and ultrasound are critical for early detection, yet each modality on its own has limited diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures.

