In recent years, deep learning has become a popular tool for analyzing and classifying medical images. However, challenges such as limited data availability, high labeling costs, and privacy concerns remain significant obstacles. Generative models have therefore been extensively explored as a way to synthesize new images and overcome these challenges. In this paper, we augment a dataset of chest CT scans for Vertebral Compression Fractures (VCFs) collected from the American University of Beirut Medical Center (AUBMC), specifically targeting the detection of incidental fractures that are often overlooked in routine chest CTs, since these scans are not typically focused on spinal analysis. Our goal is to enhance AI systems for automated early detection of such incidental fractures, addressing a critical healthcare gap and improving patient outcomes by catching fractures that might otherwise go undiagnosed. We first generate a synthetic dataset based on the segmented CTSpine1K dataset to simulate real grayscale data matching our scenario. We then use this generated data to evaluate the generative capabilities of Deep Convolutional Generative Adversarial Networks (DCGANs), variational autoencoders (VAEs), and VAE-GAN models. The VAE-GAN demonstrated the strongest performance, achieving a Fréchet Inception Distance (FID) five times lower than the other architectures. To adapt this model to real images, we perform transfer learning on the GAN, training it with the real dataset collected from AUBMC and generating additional samples. Finally, we train a CNN on augmented datasets that combine real and generated synthetic data, and compare its performance to training on real data alone. We evaluate the model exclusively on a test set composed of real images to assess the effect of the generated data on real-world performance. We find that training on the augmented datasets significantly improves classification accuracy on the real-image test set by 16 percentage points, from 73% to 89%. This improvement demonstrates that the generated data is of high quality and enhances the model's ability to generalize to unseen, real data.
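Since the abstract ranks the generative models by Fréchet Inception Distance, the following is a minimal sketch of how FID is typically computed from feature statistics. This is not code from the paper: the Inception-feature means and covariances (mu_r, sigma_r for real images; mu_g, sigma_g for generated images) are assumed to be precomputed from a pretrained Inception network.

```python
# Illustrative FID computation from precomputed feature statistics.
# FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 * (Sigma_r @ Sigma_g)^(1/2))
import numpy as np
from scipy import linalg

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """Fréchet Inception Distance between two Gaussian feature distributions."""
    diff = mu_r - mu_g
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    # Numerical error can introduce a small imaginary component; drop it.
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean)
```

A lower FID indicates that the generated feature distribution is closer to the real one, which is how the VAE-GAN's five-fold advantage over the other architectures would be read.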
DOI: http://dx.doi.org/10.1016/j.compbiomed.2024.109446