Impedance pneumography has been suggested as an ambulatory technique for monitoring respiratory diseases. However, its ambulatory nature makes the recordings more prone to noise. It is important that such noisy segments are identified and removed, since they can substantially degrade the performance of data-driven decision support tools. In this study, we investigated the added value of machine learning algorithms for separating clean from noisy bio-impedance signals. We compared three approaches: a heuristic algorithm, a feature-based classification model (support vector machine, SVM) and a convolutional neural network (CNN). The dataset consists of recordings from 47 chronic obstructive pulmonary disease patients who performed an inspiratory threshold loading protocol. During this protocol, their respiration was recorded with a bio-impedance device and with a spirometer, which served as the gold standard. Four annotators scored the signals for the presence of artefacts, based on the reference signal. We show that the accuracy of both machine learning approaches (SVM: 87.77 ± 2.64%, CNN: 87.20 ± 2.78%) is significantly higher than that of the heuristic approach (84.69 ± 2.32%), while no significant difference was observed between the two machine learning approaches. The feature-based model and the neural network obtained an AUC of 92.77 ± 2.95% and 92.51 ± 1.74%, respectively. These findings show that a data-driven approach can be beneficial for artefact detection in respiratory thoracic bio-impedance signals.
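The paper does not list its exact features or hyperparameters, so the sketch below only illustrates the feature-based route described in the abstract: hand-crafted time-domain descriptors per fixed-length bio-impedance segment, a scikit-learn SVM, and cross-validated AUC as the evaluation metric. The feature set, the segment array `X_raw` and the binary labels `y` (1 = artefact, 0 = clean) are assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): feature-based artefact detection
# on fixed-length bio-impedance segments.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def segment_features(segment):
    """Simple, illustrative time-domain descriptors of one segment (1D array)."""
    diff = np.diff(segment)
    return [
        np.std(segment),        # overall amplitude variability
        np.ptp(segment),        # peak-to-peak range
        np.mean(np.abs(diff)),  # average first difference (roughness)
        np.max(np.abs(diff)),   # largest sample-to-sample jump
        np.mean(segment ** 2),  # signal power
    ]

def fit_artefact_classifier(X_raw, y):
    """X_raw: (n_segments, n_samples) array; y: 1 = artefact, 0 = clean."""
    X = np.array([segment_features(s) for s in X_raw])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
    return clf.fit(X, y)
```

The CNN alternative compared in the paper would instead consume the raw (or lightly filtered) segments through a small stack of 1D convolutions and learn its features end to end, rather than relying on descriptors like the ones above.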


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8068282
DOI: http://dx.doi.org/10.3390/s21082613

Publication Analysis

Top Keywords
machine learning (16); artefact detection (8); impedance pneumography (8); bio-impedance signals (8); neural network (8); learning approaches (8); detection impedance (4); signals (4); pneumography signals (4); machine (4)

Similar Publications

Objective: The aim of this study was to develop and validate predictive models for perineural invasion (PNI) in gastric cancer (GC) using clinical factors and radiomics features derived from contrast-enhanced computed tomography (CE-CT) scans and to compare the performance of these models.

Methods: This study included 205 GC patients, who were randomly divided into a training set (n=143) and a validation set (n=62) in a 7:3 ratio. Optimal radiomics features were selected using the least absolute shrinkage and selection operator (LASSO) algorithm.
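As a rough illustration of the workflow named in this snippet, the sketch below performs a stratified 7:3 train/validation split and LASSO-based selection of radiomics features. The inputs `radiomics` (patients × features) and the binary PNI labels `pni` are hypothetical placeholders, and this is not the study's actual pipeline.

```python
# Illustrative sketch only: 7:3 split plus LASSO feature selection.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV

def select_radiomics_features(radiomics, pni):
    """Return indices of radiomics features kept by LASSO on the training split."""
    X_train, X_val, y_train, y_val = train_test_split(
        radiomics, pni, test_size=0.3, random_state=0, stratify=pni)
    X_scaled = StandardScaler().fit_transform(X_train)
    lasso = LassoCV(cv=5).fit(X_scaled, y_train)
    selected = np.flatnonzero(lasso.coef_)  # features with non-zero coefficients
    return selected, (X_train, X_val, y_train, y_val)
```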


Objective: To assess performance of an algorithm for automated grading of surgery-related adverse events (AEs) according to Clavien-Dindo (C-D) classification.

Summary Background Data: Surgery-related AEs are common, lead to increased morbidity for patients, and raise healthcare costs. Resource-intensive manual chart review is still the standard and, to our knowledge, algorithms using electronic health record (EHR) data to grade AEs according to the C-D classification have not been explored.


Background: Distinctive heterogeneity characterizes diffuse large B-cell lymphoma (DLBCL), one of the most frequent types of non-Hodgkin's lymphoma. Mitochondria have been demonstrated to be closely involved in tumorigenesis and progression, particularly in DLBCL.

Objective: The purposes of this study were to identify the prognostic mitochondria-related genes (MRGs) in DLBCL, and to develop a risk model based on MRGs and machine learning algorithms.


Introduction: This study aimed to identify cognitive tests that optimally relate to tau positron emission tomography (PET) signal in the inferior temporal cortex (ITC), a neocortical region associated with early tau accumulation in Alzheimer's disease (AD).

Methods: We analyzed cross-sectional data from the Harvard Aging Brain Study (HABS) (n = 128) and the Anti-Amyloid Treatment in Asymptomatic Alzheimer's (A4) study (n = 393). We used elastic net regression to identify the most robust cognitive correlates of tau PET signal in the ITC.
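A hedged sketch of the kind of analysis this snippet describes: elastic net regression of ITC tau PET signal on a battery of cognitive test scores, retaining tests with non-zero coefficients as the most robust correlates. The function and its inputs (`cognitive_scores`, `tau_itc`, `test_names`) are assumptions, not the HABS/A4 analysis code.

```python
# Sketch: elastic net selection of cognitive correlates of ITC tau PET signal.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

def tau_correlates(cognitive_scores, tau_itc, test_names):
    """cognitive_scores: (n_subjects, n_tests); tau_itc: (n_subjects,)."""
    X = StandardScaler().fit_transform(cognitive_scores)
    model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, tau_itc)
    keep = np.flatnonzero(model.coef_)  # tests retained by the penalty
    return [test_names[i] for i in keep]
```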


Emerging trends in the optimization of organic synthesis through high-throughput tools and machine learning.

Beilstein J Org Chem

January 2025

Institute of Materials Research and Engineering (IMRE), Agency for Science Technology and Research (A*STAR), 2 Fusionopolis Way, Singapore 138634, Republic of Singapore.

The discovery of the optimal conditions for chemical reactions is a labor-intensive, time-consuming task that requires exploring a high-dimensional parametric space. Historically, the optimization of chemical reactions has been performed by manual experimentation guided by human intuition and through the design of experiments where reaction variables are modified one at a time to find the optimal conditions for a specific reaction outcome. Recently, a paradigm change in chemical reaction optimization has been enabled by advances in lab automation and the introduction of machine learning algorithms.
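To make the contrast with one-variable-at-a-time experimentation concrete, the sketch below shows one iteration of a machine-learning-guided optimization loop: a Gaussian-process surrogate trained on previously run reactions ranks a pool of candidate conditions and proposes the next experiment. The variable names and the upper-confidence-bound acquisition are illustrative assumptions, not taken from the review.

```python
# Hedged sketch: surrogate-model-guided selection of the next reaction conditions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def suggest_next_conditions(X_tried, yields, candidates, kappa=2.0):
    """X_tried: conditions already run (n x d); yields: observed outcomes;
    candidates: pool of untried condition vectors to rank."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_tried, yields)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + kappa * std  # favour high predicted yield and high uncertainty
    return candidates[np.argmax(ucb)]
```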

