Background: Estimating the depth of anaesthesia (DoA) is critical in modern anaesthetic practice. Multiple DoA monitors based on the electroencephalogram (EEG) are in widespread clinical use; however, these monitors may be inaccurate under certain conditions. In this work, we hypothesize that heart rate variability (HRV)-derived features, combined with a deep neural network, can distinguish different anaesthesia states, providing a secondary tool for DoA assessment.

Methods: A novel method for distinguishing different anaesthesia states was developed based on four HRV-derived features in the time and frequency domains combined with a deep neural network. Four features were extracted from the electrocardiogram: HRV high-frequency power, low-frequency power, the high-to-low-frequency power ratio, and sample entropy. These features were then used as inputs to the deep neural network, with the expert assessment of consciousness level as the reference output. Finally, the deep neural network was compared with logistic regression, support vector machine, and decision tree models. Data from 23 anaesthetized patients were used to assess the proposed method.
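For illustration only (this is not the authors' code), the sketch below shows one way such a pipeline could be assembled: the four named HRV features are computed from an RR-interval series and fed to a small fully connected classifier. The resampling rate, spectral band limits, sample-entropy parameters, and network size are assumptions, not values reported in the paper.

```python
# Minimal sketch, assuming standard HRV band definitions (LF 0.04-0.15 Hz,
# HF 0.15-0.40 Hz) and a 4 Hz evenly resampled RR series; none of these
# settings are taken from the paper itself.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier  # compact stand-in for the paper's deep network

def sample_entropy(x, m=2, r_frac=0.2):
    """Plain sample entropy of a 1-D series (order m, tolerance r = r_frac * std)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def pair_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(dist <= r) - len(templates)) / 2.0  # drop self-matches

    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def hrv_features(rr_seconds, fs=4.0):
    """Return [HF power, LF power, HF/LF ratio, sample entropy] for one RR window."""
    t = np.cumsum(rr_seconds)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr_seconds, kind="cubic")(grid)   # evenly resampled RR series
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return np.array([hf, lf, hf / lf, sample_entropy(rr_seconds)])

# X: one feature row per analysis window; y: expert-labelled anaesthesia state.
# clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000).fit(X, y)
```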

Results: The accuracies of the four models in distinguishing the anaesthesia states were 86.2% (logistic regression), 87.5% (support vector machine), 87.2% (decision tree), and 90.1% (deep neural network). The accuracy of the deep neural network was significantly higher than that of the logistic regression (p < 0.05), support vector machine (p < 0.05), and decision tree (p < 0.05) models; that is, the proposed method outperformed all three baselines.
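The abstract does not state which statistical test produced these p-values. As a hedged illustration, the sketch below shows one common way such a comparison is made: a paired non-parametric test on per-fold cross-validation accuracies; the fold structure and choice of Wilcoxon test are assumptions, not the authors' reported procedure.

```python
# Illustrative only: paired comparison of per-fold accuracies between the
# deep network and one baseline, using identical folds for both models.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.model_selection import StratifiedKFold, cross_val_score

def compare_models(model_a, model_b, X, y, n_splits=10, seed=0):
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    acc_a = cross_val_score(model_a, X, y, cv=cv, scoring="accuracy")
    acc_b = cross_val_score(model_b, X, y, cv=cv, scoring="accuracy")
    stat, p = wilcoxon(acc_a, acc_b)  # paired test over the same folds
    return acc_a.mean(), acc_b.mean(), p
```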

Conclusions: The combination of four HRV-derived features in the time and frequency domains with a deep neural network could accurately distinguish between different anaesthesia states, although this remains a pilot feasibility study. The proposed method, used alongside other evaluation methods such as EEG, is expected to assist anaesthesiologists in the accurate evaluation of the DoA.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7923817
DOI: http://dx.doi.org/10.1186/s12871-021-01285-x

Publication Analysis

Top Keywords

deep neural (28); neural network (28); anaesthesia states (16); distinguishing anaesthesia (12); logistic regression (12); heart rate (8); features based (8); based deep (8); hrv-derived features (8); support vector (8)

Similar Publications

In Vivo Confocal Microscopy for Automated Detection of Meibomian Gland Dysfunction: A Study Based on Deep Convolutional Neural Networks.

J Imaging Inform Med

January 2025

Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.

The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) based on the in vivo confocal microscope (IVCM) images and to evaluate the performance of the DCNN model and its auxiliary significance for clinical diagnosis and treatment. We extracted 6643 IVCM images from the three hospitals' IVCM database as the training set for the DCNN model and 1661 IVCM images from the other two hospitals' IVCM database as the test set to examine the performance of the model. Construction of the DCNN model was performed using DenseNet-169.


Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers.

J Imaging Inform Med

January 2025

College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.

This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.


The problem of ground-level ozone (O₃) pollution has become a global environmental challenge with far-reaching impacts on public health and ecosystems. Effective control of ozone pollution still faces complex challenges from factors such as complex precursor interactions, variable meteorological conditions and atmospheric chemical processes. To address this problem, a convolutional neural network (CNN) model combining the improved particle swarm optimization (IPSO) algorithm and SHAP analysis, called SHAP-IPSO-CNN, is developed in this study, aiming to reveal the key factors affecting ground-level ozone pollution and their interaction mechanisms.


Multiple Myeloma (MM) is a cytogenetically heterogeneous clonal plasma cell proliferative disease whose diagnosis is supported by analyses on histological slides of bone marrow aspirate. In summary, experts use a labor-intensive methodology to compute the ratio between plasma cells and non-plasma cells. Therefore, the key aspect of the methodology is identifying these cells, which relies on the experts' attention and experience.


Transcranial magnetic stimulation (TMS) has the potential to yield insights into cortical functions and improve the treatment of neurological and psychiatric conditions. However, its reliability is hindered by a low reproducibility of results. Among other factors, such low reproducibility is due to structural and functional variability between individual brains.

