Coronary artery disease (CAD) has become one of the leading causes of morbidity and mortality worldwide. Diagnosing the presence and severity of CAD is essential for choosing the best course of treatment. Computed tomography (CT) provides high-spatial-resolution images of the heart and coronary arteries in a short acquisition time, allowing excellent visualization of the coronary arteries. However, analyzing cardiac CT scans for signs of CAD remains challenging. Research studies apply machine learning (ML) to overcome these limitations and achieve high accuracy and consistent performance. Convolutional neural networks (CNNs) are widely applied in medical image processing to identify diseases, but efficient feature extraction is needed to enhance their performance. Thus, this study develops a method to detect CAD from CT angiography images, proposing a feature extraction method and a CNN model that detect CAD in minimum time with optimal accuracy. Two benchmark datasets are used to evaluate the proposed model. The work is novel in combining the proposed feature extraction model with a CNN for CAD detection. The experimental analysis shows that the proposed method achieves prediction accuracies of 99.2% and 98.73%, with F1-scores of 98.95 and 98.82, on the two benchmark datasets. In addition, the proposed CNN model achieves areas under the receiver operating characteristic and precision-recall curves of 0.92 and 0.96 for dataset 1, and 0.91 and 0.90 for dataset 2, respectively. The findings highlight that the proposed feature extraction and CNN model outperform existing models.
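As a rough sketch of the kind of pipeline the abstract describes (a feature-extraction stage feeding a compact CNN classifier, evaluated with accuracy, F1, ROC-AUC, and PR-AUC), the following PyTorch/scikit-learn snippet may help; the layer widths, input format, and the `SmallCADNet` name are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a CNN classifier for 2D CT-angiography slices plus the
# metrics reported in the abstract. Layer sizes and input assumptions are
# illustrative only, not the published model.
import torch
import torch.nn as nn
from sklearn.metrics import (accuracy_score, f1_score,
                             roc_auc_score, average_precision_score)

class SmallCADNet(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(                 # simple feature-extraction stack
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def evaluate(logits: torch.Tensor, labels: torch.Tensor) -> dict:
    """Compute accuracy, F1, ROC-AUC, and PR-AUC from CPU model outputs."""
    probs = torch.softmax(logits.detach(), dim=1)[:, 1].numpy()  # P(CAD)
    preds = (probs >= 0.5).astype(int)
    y = labels.numpy()
    return {
        "accuracy": accuracy_score(y, preds),
        "f1": f1_score(y, preds),
        "roc_auc": roc_auc_score(y, probs),
        "pr_auc": average_precision_score(y, probs),
    }
```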
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9498285
DOI: http://dx.doi.org/10.3390/diagnostics12092073
Network
December 2024
Department of Electronics and Communication Engineering, Dronacharya Group of Institutions, Greater Noida, UP, India.
Speaker verification in text-dependent scenarios is critical for high-security applications but faces challenges such as voice quality variations, linguistic diversity, and gender-related pitch differences, which affect authentication accuracy. This paper introduces a Gender-Aware Siamese-Triplet Network-Deep Neural Network (ST-DNN) architecture to address these challenges. The Gender-Aware Network uses 2D convolutional layers with ReLU activation for initial feature extraction, followed by multi-fusion dense skip connections and batch normalization to integrate features across different depths, enhancing discrimination between male and female speakers.
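A minimal sketch of the kind of front end described here — 2D convolutions with ReLU, batch normalization, and dense (concatenating) skip connections feeding a gender classifier — is shown below; the channel counts, block depth, and input shape are assumptions rather than the ST-DNN specification.

```python
# Illustrative Conv2D + ReLU front end with densely connected skip paths
# and batch normalization; sizes are assumptions, not the paper's design.
import torch
import torch.nn as nn

class DenseSkipBlock(nn.Module):
    """Concatenates each block's output with all earlier feature maps."""
    def __init__(self, in_channels: int, growth: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1),
            nn.BatchNorm2d(growth),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return torch.cat([x, self.body(x)], dim=1)   # dense skip connection

class GenderAwareEncoder(nn.Module):
    def __init__(self, num_blocks: int = 3):
        super().__init__()
        blocks, channels = [], 1                      # single-channel spectrogram input
        for _ in range(num_blocks):
            blocks.append(DenseSkipBlock(channels))
            channels += 16
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Linear(channels, 2)            # male / female logits

    def forward(self, spec):                          # spec: (batch, 1, freq, time)
        feats = self.blocks(spec).mean(dim=(2, 3))    # global average pooling
        return self.head(feats)
```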
Sci Rep
December 2024
Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Republic of Korea.
Texture analysis generates image parameters from F-18 fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT). Although some parameters correlate with tumor biology and clinical attributes, their types and implications can be complex. To overcome this limitation, pseudotime analysis was applied to texture parameters to estimate changes in individual sample characteristics, and the prognostic significance of the estimated pseudotime of primary tumors was evaluated.
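As a very loose illustration of ordering samples along a single trajectory from texture parameters, the sketch below ranks tumors along the first principal component of standardized texture features; this is only a stand-in for pseudotime estimation, since the snippet does not state which algorithm the study used.

```python
# Crude one-dimensional ordering of tumors from PET/CT texture parameters.
# This is an illustrative proxy, not the study's pseudotime method.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def pseudo_order(texture_params: np.ndarray) -> np.ndarray:
    """texture_params: (n_samples, n_texture_features) matrix of texture values."""
    z = StandardScaler().fit_transform(texture_params)
    axis = PCA(n_components=1).fit_transform(z).ravel()  # dominant variation axis
    ranks = axis.argsort().argsort()                     # 0..n-1 ordering along that axis
    return ranks / (len(ranks) - 1)                      # normalize to a [0, 1] "pseudotime"
```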
Sci Rep
December 2024
College of Electrical Engineering, Northeast Electric Power University, Jilin, 132012, China.
The scattering of tiny particles in the atmosphere causes a haze effect on remote sensing images captured by satellites and similar devices, significantly disrupting subsequent image recognition and classification. A generative adversarial network named TRPC-GAN with texture recovery and physical constraints is proposed to mitigate this impact. This network not only effectively removes haze but also better preserves the texture information of the original remote sensing image, thereby enhancing the visual quality of the dehazed image.
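For context, dehazing work with "physical constraints" typically builds on the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy image, J the clear scene, t the transmission map, and A the atmospheric light. The sketch below only illustrates that model; the snippet does not detail how TRPC-GAN enforces it.

```python
# Atmospheric scattering model used as a physical prior in many dehazing
# methods; this is background illustration, not TRPC-GAN itself.
import numpy as np

def add_haze(clear: np.ndarray, transmission: np.ndarray, airlight: float = 0.9) -> np.ndarray:
    """clear: HxWx3 image in [0, 1]; transmission: HxW map in (0, 1]."""
    t = transmission[..., None]
    return clear * t + airlight * (1.0 - t)

def recover_scene(hazy: np.ndarray, transmission: np.ndarray, airlight: float = 0.9,
                  t_min: float = 0.1) -> np.ndarray:
    """Invert the scattering model given an estimated transmission map."""
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return np.clip((hazy - airlight * (1.0 - t)) / t, 0.0, 1.0)
```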
Sci Rep
December 2024
School of Electronic Information and Electrical Engineering, Yangtze University, Jingzhou, 434100, Hubei, China.
Emotions play a crucial role in human thoughts, cognitive processes, and decision-making. Electroencephalography (EEG) has become a widely utilized tool in emotion recognition due to its high temporal resolution, real-time monitoring capabilities, portability, and cost-effectiveness. In this paper, we propose a novel end-to-end method for emotion recognition from EEG signals, called MSDCGTNet, which is based on a Multi-Scale Dynamic 1D CNN and a Gated Transformer.
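A minimal sketch of a multi-scale 1D convolutional front end for EEG, with parallel branches of different kernel sizes whose concatenated outputs would feed a later Transformer stage, is given below; the kernel sizes, channel counts, and the omission of the gated Transformer itself are simplifying assumptions.

```python
# Multi-scale 1D CNN front end for EEG windows; sizes are illustrative,
# and the gated Transformer stage of MSDCGTNet is not included.
import torch
import torch.nn as nn

class MultiScale1DCNN(nn.Module):
    def __init__(self, in_channels: int = 32, out_channels: int = 16,
                 kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, out_channels, k, padding=k // 2),
                nn.BatchNorm1d(out_channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])

    def forward(self, eeg):                       # eeg: (batch, channels, time)
        # Concatenate branch outputs along the channel axis for later stages
        return torch.cat([b(eeg) for b in self.branches], dim=1)

# Example: 32-channel EEG, 4-second window at 128 Hz
features = MultiScale1DCNN()(torch.randn(8, 32, 512))   # -> (8, 48, 512)
```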
Sci Rep
December 2024
Weifang Education Investment Group Co., Ltd., Weifang, 261108, China.
Vehicle re-identification (re-id) refers to matching vehicles across non-overlapping camera views, that is, confirming whether a vehicle captured by cameras at different positions and times is the same vehicle. Distinguishing different identities among vehicles of the same type is one of the most challenging factors in vehicle re-identification. The key to overcoming this difficulty is to make full use of vehicles' multiple discriminative features.
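As shown in the sketch below, a typical re-id matching step compares L2-normalized appearance embeddings of a query vehicle against a gallery using cosine similarity; the embedding model and any fusion of multiple discriminative features are assumptions not taken from the snippet.

```python
# Illustrative gallery ranking for vehicle re-id using cosine similarity
# over precomputed appearance embeddings; not the paper's specific model.
import torch
import torch.nn.functional as F

def rank_gallery(query_emb: torch.Tensor, gallery_embs: torch.Tensor) -> torch.Tensor:
    """query_emb: (d,), gallery_embs: (n, d); returns gallery indices, best match first."""
    q = F.normalize(query_emb, dim=0)
    g = F.normalize(gallery_embs, dim=1)
    similarity = g @ q                      # cosine similarity per gallery image
    return similarity.argsort(descending=True)
```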