Deep convolutional generative adversarial networks (GANs) allow images to be created from existing databases. We applied a modified lightweight GAN (FastGAN) algorithm to cerebral blood flow SPECTs and aimed to evaluate whether this technology can generate synthetic images close to those of real patients. Investigating three anatomical levels (cerebellum, CER; basal ganglia, BG; cortex, COR), 551 normal (248 CER, 174 BG, 129 COR) and 387 pathological brain SPECTs using N-isopropyl-p-[123I]iodoamphetamine ([123I]IMP) were included. Among the latter scans, cerebral ischemic disease comprised 291 unilateral (66 CER, 116 BG, 109 COR) and 96 bilateral defect patterns (44 BG, 52 COR). Our model was trained using a three-compartment anatomical input (dataset 'A', including CER, BG, and COR), while for dataset 'B', only one anatomical region (COR) was included. Quantitative analyses provided mean counts (MC) and left/right (LR) hemisphere ratios, which were then compared to the quantification from real images. For MC, 'B' was significantly different from real images for normal and bilateral defect patterns (P < 0.0001 each), but not for unilateral ischemia (P = 0.77). Comparable results were recorded for LR, as normal and ischemia scans were significantly different from images acquired from real patients (P ≤ 0.01 each). Images provided by 'A', however, yielded MC quantification comparable to real images, including normal (P = 0.8) and pathological scans (unilateral, P = 0.99; bilateral, P = 0.68). For LR, only unilateral defect scans (P = 0.03), but not normal or bilateral defect scans (P ≥ 0.08), reached significance relative to images of real patients. With a minimum of only three anatomical compartments serving as stimuli, the created cerebral SPECTs are indistinguishable from images of real patients. The applied FastGAN algorithm may thus provide sufficient scan numbers in various clinical scenarios, e.g., for "data-hungry" deep learning technologies or in the context of orphan diseases.
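As a rough illustration of the quantification described above, the following minimal sketch shows one way per-slice mean counts (MC) and a left/right (LR) hemisphere ratio could be computed. The array layout, background masking, and midline split at the column midpoint are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of the MC / LR quantification described in the abstract.
import numpy as np

def quantify_slice(slice_counts: np.ndarray) -> tuple[float, float]:
    """Return (mean counts, L/R hemisphere ratio) for a 2-D axial slice.

    Assumes the mid-sagittal plane splits the array at its column midpoint
    and that background voxels have already been masked to zero.
    """
    brain = slice_counts[slice_counts > 0]              # ignore masked background
    mc = float(brain.mean())                            # mean counts over brain voxels
    mid = slice_counts.shape[1] // 2
    left, right = slice_counts[:, :mid], slice_counts[:, mid:]
    lr = float(left[left > 0].mean() / right[right > 0].mean())
    return mc, lr

# Example: a synthetic 128x128 slice with a simulated left-sided defect
rng = np.random.default_rng(0)
slice_counts = rng.poisson(100, (128, 128)).astype(float)
slice_counts[:, :64] *= 0.6                             # unilateral hypoperfusion
print(quantify_slice(slice_counts))                     # MC, and LR ratio < 1
```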
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9637159 | PMC |
| http://dx.doi.org/10.1038/s41598-022-23325-3 | DOI Listing |
PLoS One
January 2025
School of Information Science and Engineering, Xinjiang University, Urumqi, China.
Anomaly detection is crucial in areas such as financial fraud identification, cybersecurity defense, and health monitoring, as it directly affects the accuracy and security of decision-making. Existing anomaly detection methods based on generative adversarial networks (GANs) overlook the importance of local density, limiting their effectiveness at detecting anomalous objects in complex data distributions. To address this challenge, we introduce a generative adversarial local density-based anomaly detection (GALD) method, which combines the data distribution modeling capabilities of GANs with local synthetic density analysis.
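Since the abstract does not spell out the GALD formulation, the sketch below only illustrates the underlying idea of local-density anomaly scoring: points whose k-NN density is low relative to their neighbors' densities score as anomalous. The k-NN density estimate and the scoring ratio are assumptions in the spirit of LOF, not the paper's method.

```python
# LOF-style local-density scoring sketch; higher score = more anomalous.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_density_scores(X: np.ndarray, k: int = 10) -> np.ndarray:
    """Score each row of X by its neighbors' density over its own density."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, idx = nn.kneighbors(X)                  # first neighbor is the point itself
    density = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)
    neighbor_density = density[idx[:, 1:]].mean(axis=1)
    return neighbor_density / (density + 1e-12)    # >> 1 in sparse regions

# Example: a dense Gaussian blob plus a few scattered outliers
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.uniform(-8, 8, (10, 2))])
scores = local_density_scores(X)
print(np.argsort(scores)[-10:])                    # indices of likely anomalies
```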
Transl Vis Sci Technol
January 2025
Glaucoma Service, Wills Eye Hospital, Philadelphia, PA, USA.
Purpose: The integration of artificial intelligence (AI), particularly deep learning (DL), with optical coherence tomography (OCT) offers significant opportunities in the diagnosis and management of glaucoma. This article explores the application of various DL models in enhancing OCT capabilities and addresses the challenges associated with their clinical implementation.
Methods: A review was conducted of articles utilizing DL models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), autoencoders, and large language models (LLMs).
CNS Neurosci Ther
January 2025
Department of Radiology, Affiliated Hangzhou First People's Hospital, Westlake University School of Medicine, Hangzhou, China.
Aims: To develop a transformer-based generative adversarial network (trans-GAN) that can generate synthetic material decomposition images from single-energy CT (SECT) for real-time detection of intracranial hemorrhage (ICH) after endovascular thrombectomy.
Materials: We retrospectively collected data from two hospitals, consisting of 237 dual-energy CT (DECT) scans, including matched iodine overlay maps, virtual noncontrast images, and simulated SECT images. These scans were randomly divided into a training set (n = 190) and an internal validation set (n = 47) in a 4:1 ratio, stratified by the proportion of ICH.
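A minimal sketch of such a 4:1 split, stratified by ICH status, is shown below; the variable names, label counts, and use of scikit-learn are illustrative assumptions, not the authors' code.

```python
# Hypothetical stratified 4:1 train/validation split of the 237 DECT scans.
from sklearn.model_selection import train_test_split

scan_ids = list(range(237))              # placeholder DECT study identifiers
ich_labels = [1] * 60 + [0] * 177        # assumed 0/1 ICH status per scan

train_ids, val_ids = train_test_split(
    scan_ids,
    test_size=47,                        # 4:1 ratio: 190 train / 47 validation
    stratify=ich_labels,                 # preserve the proportion of ICH in both sets
    random_state=42,
)
print(len(train_ids), len(val_ids))      # 190 47
```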
J Imaging
January 2025
School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213000, China.
Manual labeling of lesions in medical image analysis presents a significant challenge due to its labor-intensive and inefficient nature, which ultimately strains essential medical resources and impedes the advancement of computer-aided diagnosis. This paper introduces a novel medical image-segmentation framework named Efficient Generative-Adversarial U-Net (EGAUNet), designed to facilitate rapid and accurate multi-organ labeling. To enhance the model's capability to comprehend spatial information, we propose the Global Spatial-Channel Attention Mechanism (GSCA).
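The exact GSCA design is not detailed in the abstract; as a hedged sketch, the PyTorch module below shows a common CBAM-style combination of channel and spatial attention that such a mechanism typically builds on. The module name, reduction ratio, and kernel size are assumptions.

```python
# Generic spatial-channel attention block (CBAM-style), not the paper's GSCA.
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per-channel weights
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: one H x W map from pooled channel statistics
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel(x)                               # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)  # avg + max over channels
        return x * self.spatial(pooled)                       # reweight locations

x = torch.randn(1, 32, 64, 64)
print(SpatialChannelAttention(32)(x).shape)                   # torch.Size([1, 32, 64, 64])
```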
J Imaging
January 2025
Department of Obstetrics and Gynecology, Faculty of Medicine, Chulalongkorn University, Bangkok 10330, Thailand.
The current process of embryo selection in in vitro fertilization is based on morphological criteria; embryos are manually evaluated by embryologists under subjective assessment. In this study, a deep learning-based pipeline was developed to classify the viability of embryos using combined inputs: microscopic images of embryos and additional features, such as patient age, together with developed pseudo-features, including a continuous interpretation of Istanbul grading scores obtained by predicting the embryo stage, inner cell mass, and trophectoderm. For viability prediction, convolution-based transfer learning models were employed, multiple pretrained models were compared, and image preprocessing techniques and hyperparameter optimization via Optuna were utilized.
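To make the Optuna step concrete, here is a minimal sketch of such a hyperparameter search; the search space, backbone names, and the train_and_score placeholder are assumptions rather than the authors' configuration.

```python
# Hypothetical Optuna search over fine-tuning hyperparameters.
import optuna

def train_and_score(lr: float, dropout: float, backbone: str) -> float:
    """Placeholder for fine-tuning a pretrained CNN and returning validation AUC."""
    # Stand-in objective so the sketch runs end to end; a real version would
    # train the chosen backbone here and evaluate on held-out embryos.
    bonus = 0.01 if backbone == "resnet50" else 0.0
    return 1.0 - abs(lr - 1e-3) - abs(dropout - 0.3) + bonus

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    backbone = trial.suggest_categorical("backbone", ["resnet50", "efficientnet_b0"])
    return train_and_score(lr, dropout, backbone)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```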