Predicting post-percutaneous coronary intervention (PCI) outcomes is crucial for effective patient management and quality improvement in healthcare. Accurate prediction, however, requires the integration of multimodal clinical data, including physiological signals, demographics, and patient history, to estimate prognosis. Integrating such high-dimensional, multimodal data is a significant challenge because of its complexity and the need for sophisticated analytical methods. Our study presents a comparative performance analysis of a state-of-the-art vision transformer (ViT) and a proposed novel multi-branch CNN model with block attention for multimodal data analysis in a joint fusion framework. As a comparison model to the ViT, we propose a new joint fusion architecture consisting of a convolutional neural network (CNN) with a convolutional block attention module (CBAM). We integrate electrocardiogram (ECG) images and tabular electronic health records (EHR) from 13,064 subjects, with 6,871 samples for training and 6,193 for testing (stratified sampling), to predict three clinically relevant post-PCI (6-month) endpoints: heart failure, all-cause mortality, and stroke. The learned representations are combined at an intermediate layer and then processed by a fully connected layer. The proposed model achieves the highest AUROC scores of 0.849, 0.913, and 0.794 for predicting heart failure, all-cause mortality, and stroke, respectively. Surpassing both the baseline EHR model and the ViT for heart failure prediction (DeLong's test p-value = 0.043), the proposed CNN + CBAM fusion model highlights the importance of preserving local spatial features via low-level CNN filters and capturing semi-global dependencies via block attention. Without using any laboratory test results or vital signs, the proposed attention-based CNN model achieves state-of-the-art performance directly from ECG images and outperforms the ViT baseline. The proposed multimodal integration strategy could lead to more accurate, multimodal data-driven models for predicting PCI outcomes. As a result, cardiologists could better tailor treatment plans, optimize patient management strategies, and improve overall clinical outcomes after complex PCI procedures.
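The following is a minimal PyTorch sketch of the joint fusion idea described in the abstract: a small CNN branch with CBAM processes the ECG image, an MLP branch processes the tabular EHR features, the two intermediate representations are concatenated, and a fully connected head produces logits for the three endpoints. All layer sizes, the EHR feature count, and the class and parameter names are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch only: assumed layer widths and depths, not the published architecture.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from global average- and max-pooled descriptors.
        b, c, _, _ = x.shape
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))


class JointFusionModel(nn.Module):
    """CNN + CBAM image branch and MLP EHR branch, fused at an intermediate layer."""

    def __init__(self, ehr_dim: int = 32, n_endpoints: int = 3):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            CBAM(16),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            CBAM(32),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B, 32)
        )
        self.ehr_branch = nn.Sequential(
            nn.Linear(ehr_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 32), nn.ReLU(inplace=True),     # -> (B, 32)
        )
        # Joint (intermediate) fusion: concatenate both representations, then a FC head.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(inplace=True),
            nn.Linear(64, n_endpoints),                   # logits: heart failure, death, stroke
        )

    def forward(self, ecg_image, ehr):
        fused = torch.cat([self.image_branch(ecg_image), self.ehr_branch(ehr)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = JointFusionModel()
    logits = model(torch.randn(4, 1, 224, 224), torch.randn(4, 32))
    print(logits.shape)  # torch.Size([4, 3])
```

In practice, a model of this shape would be trained end to end with a multi-label binary cross-entropy loss (one output per endpoint), so that the image and EHR branches learn jointly through the shared fusion head; the specific training recipe is likewise an assumption here.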
DOI: http://dx.doi.org/10.1016/j.compbiomed.2025.109966