Background: Accurate detection of arrhythmic events in the intensive care unit (ICU) is of paramount significance in providing timely care. However, traditional ICU monitors generate a high rate of false alarms, causing alarm fatigue. In this work, we develop an algorithm to improve life-threatening arrhythmia detection in the ICU using a deep learning approach.

Methods and Results: This study involves a total of 953 independent life-threatening arrhythmia alarms generated from the ICU bedside monitors of 410 patients. Specifically, we used the ECG (4 channels), arterial blood pressure, and photoplethysmograph signals to accurately detect the onset and offset of various arrhythmias, without prior knowledge of the alarm type. We used a hybrid convolutional neural network (CNN)-based classifier that fuses traditional handcrafted features with features automatically learned by CNNs. Further, the proposed architecture remains flexible enough to be adapted to various arrhythmic conditions as well as multiple physiological signals. Our hybrid-CNN approach achieved superior performance compared with methods that used only a CNN. We evaluated our algorithm using 5-fold cross-validation repeated 5 times and obtained an accuracy of 87.5%±0.5% and a score of 81%±0.9%. Independent evaluation of our algorithm on the publicly available PhysioNet 2015 Challenge database resulted in an overall classification accuracy and score of 93.9% and 84.3%, respectively, indicating its efficacy and generalizability.

Conclusions: Our method accurately detects multiple arrhythmic conditions. Suitable translation of our algorithm may significantly improve the quality of care in ICUs by reducing the burden of false alarms.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9075394 | PMC |
| http://dx.doi.org/10.1161/JAHA.121.023222 | DOI Listing |
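The feature-fusion idea in the abstract above, concatenating CNN-learned features with traditional handcrafted ones before classification, can be sketched minimally in numpy. This is not the authors' implementation: the kernels are untrained, the handcrafted statistics, the toy ECG-like segment, and the sigmoid head are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_features(signal, kernels):
    """Toy 1-D convolution + ReLU + global average pooling: a stand-in
    for the learned CNN feature extractor described in the abstract."""
    feats = []
    for k in kernels:
        resp = np.convolve(signal, k, mode="valid")
        feats.append(np.maximum(resp, 0).mean())
    return np.array(feats)

def handcrafted_features(signal):
    """Classic summary statistics of the kind often handcrafted
    for physiological waveforms (illustrative choices only)."""
    return np.array([signal.mean(), signal.std(), np.abs(np.diff(signal)).mean()])

# Hypothetical 1-second waveform segment standing in for one ECG channel
segment = np.sin(np.linspace(0, 8 * np.pi, 250)) + 0.1 * rng.standard_normal(250)

kernels = [rng.standard_normal(5) for _ in range(4)]
fused = np.concatenate([conv1d_features(segment, kernels),
                        handcrafted_features(segment)])

# A linear classifier head over the fused vector (weights untrained here)
w = rng.standard_normal(fused.size)
score = 1.0 / (1.0 + np.exp(-(fused @ w)))  # sigmoid "true alarm" probability
print(fused.shape, float(score))
```

In a real system the concatenated vector would feed trained fully connected layers; the point here is only the fusion step, which lets the classifier exploit both learned and domain-knowledge features.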
PLoS One
January 2025
Renewable Energy Science and Engineering Department, Faculty of Postgraduate Studies for Advanced Sciences (PSAS), Beni-Suef University, Beni-Suef, Egypt.
This study presents a comprehensive comparative analysis of Machine Learning (ML) and Deep Learning (DL) models for predicting Wind Turbine (WT) power output from environmental variables such as temperature, humidity, wind speed, and wind direction. Alongside the DL models, namely Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), and Convolutional Neural Network (CNN), the following ML models were evaluated: Linear Regression (LR), Support Vector Regressor (SVR), Random Forest (RF), Extra Trees (ET), Adaptive Boosting (AdaBoost), Categorical Boosting (CatBoost), Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM). Using a dataset of 40,000 observations, the models were assessed on R-squared, Mean Absolute Error (MAE), and Root Mean Square Error (RMSE).
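The three evaluation metrics named in this study have standard definitions that can be computed directly; the sketch below uses hypothetical power values, not the study's data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute R-squared, MAE, and RMSE from their standard definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = float(resid @ resid)                          # residual sum of squares
    ss_tot = float(((y_true - y_true.mean()) ** 2).sum())  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = float(np.abs(resid).mean())
    rmse = float(np.sqrt((resid ** 2).mean()))
    return r2, mae, rmse

# Hypothetical turbine power outputs (kW) vs. model predictions
y_true = [120.0, 150.0, 90.0, 200.0]
y_pred = [118.0, 155.0, 95.0, 190.0]
r2, mae, rmse = regression_metrics(y_true, y_pred)
print(r2, mae, rmse)
```

Note that RMSE penalizes large errors more heavily than MAE, which is why studies typically report both alongside R-squared.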
Transl Vis Sci Technol
January 2025
School of Optometry and Vision Science, University of New South Wales, Sydney, Australia.
Purpose: The purpose of this study was to develop and validate a deep-learning model for noninvasive anemia detection, hemoglobin (Hb) level estimation, and identification of anemia-related retinal features using fundus images.
Methods: The dataset included 2265 participants aged 40 years and above from a population-based study in South India. The dataset included ocular and systemic clinical parameters, dilated retinal fundus images, and hematological data such as complete blood counts and Hb concentration levels.
Transl Vis Sci Technol
January 2025
Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand.
Purpose: The purpose of this study was to develop a deep learning approach that restores artifact-laden optical coherence tomography (OCT) scans and predicts functional loss on the 24-2 Humphrey Visual Field (HVF) test.
Methods: This cross-sectional, retrospective study used 1674 visual field (VF)-OCT pairs from 951 eyes for training and 429 pairs from 345 eyes for testing. Peripapillary retinal nerve fiber layer (RNFL) thickness map artifacts were corrected using a generative diffusion model.
J Acoust Soc Am
January 2025
University of Bath, Bath, United Kingdom.
Improved hardware and processing techniques such as synthetic aperture sonar have enabled imaging sonar with centimeter resolution. However, practical limitations and legacy hardware constrain the resolution achievable in both modern and historical datasets. This study proposes using single-image super-resolution based on a conditioned diffusion model to map between images at different resolutions.
Eur Heart J Digit Health
January 2025
Hunan Key Laboratory of Biomedical Nanomaterials and Devices, Hunan University of Technology, No. 88 West Taishan Road, Zhuzhou 412007, Hunan, China.
Aims: The electrocardiogram (ECG) is the primary method for diagnosing atrial fibrillation (AF), but interpreting ECGs is time-consuming and labour-intensive, so automated approaches merit further exploration.

Methods And Results: We collected ECG data from 6590 patients (the YY2023 dataset), classified as Normal, AF, and Other. We constructed the AF recognition model CLA-AF (CNN BiLSTM Attention-Atrial Fibrillation) from a Convolutional Neural Network (CNN), a bidirectional Long Short-Term Memory (BiLSTM) network, and an Attention mechanism.
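The attention component of a CNN-BiLSTM-Attention stack typically pools the per-timestep hidden states into one vector before classification. The numpy sketch below shows only that pooling step under assumed shapes; the hidden states, scoring vector, and dimensions are all hypothetical stand-ins, not the CLA-AF architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Score each timestep of the sequence features H (T x d) with a
    learned vector w, normalize the scores to weights, and return the
    attention-weighted sum of the timesteps."""
    alpha = softmax(np.tanh(H) @ w)   # (T,) attention weights summing to 1
    return alpha @ H, alpha           # (d,) pooled feature vector

T, d = 10, 8                          # hypothetical sequence length / feature dim
H = rng.standard_normal((T, d))       # stand-in for BiLSTM hidden states
w = rng.standard_normal(d)            # stand-in for a trained scoring vector

pooled, alpha = attention_pool(H, w)
print(pooled.shape, float(alpha.sum()))
```

The effect is that timesteps with high scores (e.g. beats showing irregular R-R intervals) dominate the pooled vector fed to the final classifier, rather than all timesteps contributing equally.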