Background: In Japan, individuals with mild COVID-19 illness were previously required to be monitored in designated areas and were hospitalized only if their condition worsened to moderate illness or worse. Daily monitoring with a pulse oximeter was a crucial indicator for hospitalization. However, a drastic increase in the number of patients resulted in a shortage of pulse oximeters for monitoring. Therefore, an alternative, cost-effective method for monitoring patients with mild illness was required. Previous studies have shown that voice biomarkers for Parkinson disease or Alzheimer disease are useful for classifying or monitoring symptoms; we therefore attempted to adapt voice biomarkers to classify the severity of COVID-19 using a dynamic time warping (DTW) algorithm, in which voice wavelets are treated as 2D features and the differences between wavelet features are calculated as scores.
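
The paper does not publish its implementation; as a rough illustration of the underlying idea only, the sketch below computes a classic DTW alignment cost between two 1D waveforms in Python. The function name dtw_score and the sine-wave example are hypothetical and simplified relative to the authors' 2D wavelet features.

```python
# A minimal sketch of a DTW-based difference score between two standardized
# waveforms. Names and toy data are illustrative, not taken from the paper.
import numpy as np

def dtw_score(x: np.ndarray, y: np.ndarray) -> float:
    """Return the cumulative DTW alignment cost between two 1D sequences."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])              # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Example: two slightly shifted sine waves yield a small alignment cost.
t = np.linspace(0, 2 * np.pi, 100)
print(dtw_score(np.sin(t), np.sin(t + 0.1)))
```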

Objective: This feasibility study aimed to test whether DTW-based indices can serve as voice biomarkers in a binary classification model that uses the voices of patients with COVID-19 to distinguish moderate illness from mild illness at a statistically significant level.

Methods: We conducted a cross-sectional study using voice samples from patients with COVID-19. Three kinds of long vowels were processed into 10-cycle waveforms with standardized power and time axes. DTW-based indices were generated from all pairs of waveforms, tested with the Mann-Whitney U test (α<.01), and verified with linear discriminant analysis and a confusion matrix to determine which indices were better suited for binary classification of disease severity. A binary classification model was built as a generalized linear model (GLM) using the most promising indices as predictors. Model performance was validated with the receiver operating characteristic curve and area under the curve (ROC/AUC), and model accuracy was calculated from the confusion matrix.
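
As a hedged illustration of this pipeline, and not the authors' code, the Python sketch below screens a set of DTW-based indices with the Mann-Whitney U test and then fits a logistic regression as a stand-in for the paper's GLM, evaluating it with ROC/AUC, a confusion matrix, and balanced accuracy. The feature matrix X, the labels y, and the random toy data are placeholders.

```python
# Sketch of the screening and classification steps, assuming the pairwise
# DTW-based indices have already been collected into a feature matrix.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (roc_auc_score, confusion_matrix,
                             balanced_accuracy_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 6))      # toy data: 6 DTW-based indices per sample
y = rng.integers(0, 2, size=110)   # 0 = mild illness, 1 = moderate illness

# 1) Screen each index with the Mann-Whitney U test (alpha < .01).
for k in range(X.shape[1]):
    u, p = mannwhitneyu(X[y == 0, k], X[y == 1, k])
    print(f"index {k}: U={u:.1f}, p={p:.3g}")

# 2) Fit a binary classifier (logistic regression as a GLM stand-in).
model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]

# 3) Evaluate with ROC/AUC, the confusion matrix, and balanced accuracy.
print("AUC:", roc_auc_score(y, prob))
print(confusion_matrix(y, model.predict(X)))
print("balanced accuracy:", balanced_accuracy_score(y, model.predict(X)))
```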

Results: Participants in this study (n=295) were infected with COVID-19 between June 2021 and March 2022, were aged 20 years or older, and recuperated in Kanagawa Prefecture. Voice samples (n=110) were selected from the participants' attribution matrix based on age group, sex, time of infection, and whether they had mild illness (n=61) or moderate illness (n=49). The DTW-based variance indices were significant (P<.001 for all but 1 of 6 indices), with balanced accuracies ranging from 79% to 88.6% for the /a/, /e/, and /u/ vowel sounds. The GLM achieved a high balanced accuracy of 86.3% (for /a/), 80.2% (for /e/), and 88% (for /u/) and an ROC/AUC of 94.8% (95% CI 90.6%-94.8%) for /a/, 86.5% (95% CI 79.8%-86.5%) for /e/, and 95.6% (95% CI 92.1%-95.6%) for /u/.
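
For readers unfamiliar with the metric, balanced accuracy is the mean of sensitivity and specificity, which makes it robust to the mild/moderate class imbalance (61 vs 49). The short worked example below uses illustrative counts, not the study's actual confusion matrix.

```python
# Hypothetical confusion-matrix counts, for illustration only.
tp, fn = 43, 6   # moderate illness: correctly / incorrectly classified
tn, fp = 53, 8   # mild illness: correctly / incorrectly classified

sensitivity = tp / (tp + fn)   # recall on moderate illness
specificity = tn / (tn + fp)   # recall on mild illness
balanced_accuracy = (sensitivity + specificity) / 2
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, "
      f"balanced accuracy={balanced_accuracy:.3f}")
```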

Conclusions: The proposed model can serve as a voice biomarker for an alternative, cost-effective method of monitoring the progress of patients with COVID-19 in care.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10631492
DOI: http://dx.doi.org/10.2196/50924

Similar Publications

Unraveling the associations between voice pitch and major depressive disorder: a multisite genetic study. Mol Psychiatry, December 2024. Department of Psychiatry and Biobehavioral Sciences, Brain Research Institute, University of California Los Angeles, Los Angeles, CA, USA.

Major depressive disorder (MDD) often goes undiagnosed due to the absence of clear biomarkers. We sought to identify voice biomarkers for MDD and separate biomarkers indicative of MDD predisposition from biomarkers reflecting current depressive symptoms. Using a two-stage meta-analytic design to remove confounds, we tested the association between features representing vocal pitch and MDD in a multisite case-control cohort study of Chinese women with recurrent depression.

Harmonic-to-noise ratio as speech biomarker for fatigue: K-nearest neighbour machine learning algorithm. Med J Armed Forces India, December 2024. Associate Professor, Dayanand Sagar University, Bengaluru, India.

Background: Vital information about a person's physical and emotional health can be perceived in their voice. After sleep loss, altered voice quality is noticed. The circadian rhythm controls the sleep cycle, and when it is askew, it results in fatigue, which is manifested in speech.

Introduction: The clinical, research and advocacy communities for Rett syndrome are striving to achieve clinical trial readiness, including having fit-for-purpose clinical outcome assessments. This study aimed to (1) describe psychometric properties of clinical outcome assessment for Rett syndrome and (2) identify what is needed to ensure that fit-for-purpose clinical outcome assessments are available for clinical trials.

Methods: Clinical outcome assessments for the top 10 priority domains identified in the Voice of the Patient Report for Rett syndrome were compiled and available psychometric data were extracted.

Introduction: The 2024 Voice AI Symposium, hosted by the Bridge2AI-Voice Consortium in Tampa, FL, featured two keynote speeches that addressed the intersection of voice AI, healthcare, ethics, and law. Dr. Rupal Patel and Dr.

Recent research on schizophrenia seeks to identify objective biomarkers of the disease. The voice, and in particular the fundamental frequency (F0), could be one of them.

Methodology: We conducted a cross-sectional and descriptive study with a sample of 154 people.
