AI Article Synopsis

  • Deep convolutional GANs, specifically the FastGAN algorithm, were used to generate brain SPECT images from existing databases to assess their similarity to real patient images.
  • The study analyzed scans from 551 normal and 387 pathological cases, focusing on three brain regions: cerebellum, basal ganglia, and cortex, while examining both unilateral and bilateral ischemic patterns.
  • Results indicated that images created using the three-compartment input were quantitatively similar to real scans, with only the left/right ratio of unilateral ischemia scans differing significantly, suggesting that FastGAN could efficiently augment clinical imaging data.

Article Abstract

Deep convolutional generative adversarial networks (GAN) allow for creating images from existing databases. We applied a modified light-weight GAN (FastGAN) algorithm to cerebral blood flow SPECTs and aimed to evaluate whether this technology can generate images close to those of real patients. Investigating three anatomical levels (cerebellum, CER; basal ganglia, BG; cortex, COR), 551 normal (248 CER, 174 BG, 129 COR) and 387 pathological brain SPECTs using N-isopropyl-p-[123I]-iodoamphetamine ([123I]-IMP) were included. For the latter scans, cerebral ischemic disease comprised 291 uni- (66 CER, 116 BG, 109 COR) and 96 bilateral defect patterns (44 BG, 52 COR). Our model was trained using a three-compartment anatomical input (dataset 'A', including CER, BG, and COR), while for dataset 'B', only one anatomical region (COR) was included. Quantitative analyses provided mean counts (MC) and left/right (LR) hemisphere ratios, which were then compared to quantification from real images. For MC, 'B' was significantly different for normal and bilateral defect patterns (both P < 0.0001), but not for unilateral ischemia (P = 0.77). Comparable results were recorded for LR, as normal and ischemia scans were significantly different relative to images acquired from real patients (P ≤ 0.01 for each). Images provided by 'A', however, revealed comparable quantitative results when compared to real images, including normal (P = 0.8) and pathological scans (unilateral, P = 0.99; bilateral, P = 0.68) for MC. For LR, only uni- (P = 0.03), but not normal or bilateral defect scans (P ≥ 0.08), reached significance relative to images of real patients. With a minimum of only three anatomical compartments serving as stimuli, created cerebral SPECTs are indistinguishable from images of real patients.
The applied FastGAN algorithm may therefore make it possible to provide sufficient scan numbers in various clinical scenarios, e.g., for "data-hungry" deep learning technologies or in the context of orphan diseases.
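The two metrics compared above can be illustrated with a short sketch. The paper does not publish its exact ROI delineation, so the definitions below are assumptions: mean counts (MC) taken over the whole slice, and the left/right (LR) hemisphere ratio computed by splitting the slice at the midline column.

```python
import numpy as np

def mean_counts(slice_2d: np.ndarray) -> float:
    """Mean voxel counts over the slice (assumed MC definition)."""
    return float(slice_2d.mean())

def lr_ratio(slice_2d: np.ndarray) -> float:
    """Left/right hemisphere ratio, splitting at the midline column
    (assumed LR definition; real pipelines use anatomical ROIs)."""
    w = slice_2d.shape[1]
    left = slice_2d[:, : w // 2].mean()
    right = slice_2d[:, w - w // 2 :].mean()
    return float(left / right)

# Toy example: a uniform, symmetric slice yields MC = 1.0 and LR = 1.0,
# while attenuating one hemisphere shifts the LR ratio away from 1.0.
img = np.ones((64, 64))
ischemic = img.copy()
ischemic[:, :32] *= 0.5  # simulated unilateral count reduction
print(mean_counts(img), lr_ratio(img))        # 1.0 1.0
print(round(lr_ratio(ischemic), 2))           # 0.5
```

Distributions of such per-scan metrics from generated versus real images could then be compared with a two-sample significance test, in the spirit of the P values reported in the abstract.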

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9637159 (PMC)
http://dx.doi.org/10.1038/s41598-022-23325-3 (DOI Listing)

Publication Analysis

Top Keywords

generative adversarial (8), brain spects (8), real patients (8), bilateral defect (8), defect patterns (8), cor (6), adversarial network-created (4), network-created brain (4), spects cerebral (4), cerebral ischemia (4)

Similar Publications

Anomaly detection is crucial in areas such as financial fraud identification, cybersecurity defense, and health monitoring, as it directly affects the accuracy and security of decision-making. Existing generative adversarial nets (GANs)-based anomaly detection methods overlook the importance of local density, limiting their effectiveness in detecting anomaly objects in complex data distributions. To address this challenge, we introduce a generative adversarial local density-based anomaly detection (GALD) method, which combines the data distribution modeling capabilities of GANs with local synthetic density analysis.

View Article and Find Full Text PDF

Purpose: The integration of artificial intelligence (AI), particularly deep learning (DL), with optical coherence tomography (OCT) offers significant opportunities in the diagnosis and management of glaucoma. This article explores the application of various DL models in enhancing OCT capabilities and addresses the challenges associated with their clinical implementation.

Methods: A review of articles utilizing DL models was conducted, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), autoencoders, and large language models (LLMs).

Aims: To develop a transformer-based generative adversarial network (trans-GAN) that can generate synthetic material decomposition images from single-energy CT (SECT) for real-time detection of intracranial hemorrhage (ICH) after endovascular thrombectomy.

Materials: We retrospectively collected data from two hospitals, consisting of 237 dual-energy CT (DECT) scans, including matched iodine overlay maps, virtual noncontrast, and simulated SECT images. These scans were randomly divided into a training set (n = 190) and an internal validation set (n = 47) in a 4:1 ratio based on the proportion of ICH.

Efficient Generative-Adversarial U-Net for Multi-Organ Medical Image Segmentation.

J Imaging

January 2025

School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213000, China.

Manual labeling of lesions in medical image analysis presents a significant challenge due to its labor-intensive and inefficient nature, which ultimately strains essential medical resources and impedes the advancement of computer-aided diagnosis. This paper introduces a novel medical image-segmentation framework named Efficient Generative-Adversarial U-Net (EGAUNet), designed to facilitate rapid and accurate multi-organ labeling. To enhance the model's capability to comprehend spatial information, we propose the Global Spatial-Channel Attention Mechanism (GSCA).

The current process of embryo selection in in vitro fertilization is based on morphological criteria; embryos are manually evaluated by embryologists under subjective assessment. In this study, a deep learning-based pipeline was developed to classify the viability of embryos using combined inputs, including microscopic images of embryos and additional features, such as patient age and developed pseudo-features, including a continuous interpretation of Istanbul grading scores by predicting the embryo stage, inner cell mass, and trophectoderm. For viability prediction, convolution-based transferred learning models were employed, multiple pretrained models were compared, and image preprocessing techniques and hyperparameter optimization via Optuna were utilized.
