Background: Machine learning-based facial and vocal measurements have demonstrated relationships with schizophrenia diagnosis and severity. Demonstrating utility and validity of remote and automated assessments conducted outside of controlled experimental or clinical settings can facilitate scaling such measurement tools to aid in risk assessment and tracking of treatment response in populations that are difficult to engage.

Objective: This study aimed to determine the accuracy of machine learning-based facial and vocal measurements acquired through automated assessments conducted remotely through smartphones.

Methods: Facial and vocal characteristics, including facial expressivity, vocal acoustics, and speech prevalence, were measured in 20 patients with schizophrenia over the course of 2 weeks in response to two classes of prompts previously used in experimental laboratory assessments: evoked prompts, in which participants are guided to produce specific facial expressions and speech, and spontaneous prompts, in which participants are shown emotionally evocative imagery and asked to respond freely. Facial and vocal measurements were assessed in relation to schizophrenia symptom severity using the Positive and Negative Syndrome Scale.

Results: Vocal markers, including speech prevalence, vocal jitter, fundamental frequency, and vocal intensity, demonstrated specificity as markers of negative symptom severity, while facial expressivity proved a robust marker of overall schizophrenia symptom severity.

Conclusions: Established facial and vocal measurements, collected remotely from patients with schizophrenia via smartphones in response to automated task prompts, demonstrated accuracy as markers of schizophrenia symptom severity. Clinical implications are discussed.
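The study does not publish its feature-extraction code, but the vocal markers named in the Results (fundamental frequency, vocal intensity, and jitter) are standard acoustic measures. The following is a minimal, hypothetical sketch of how such markers could be computed from a speech waveform, using only NumPy: F0 is estimated per frame by autocorrelation, intensity as the RMS amplitude, and jitter as the mean relative cycle-to-cycle variation of the F0 period. Function names, frame length, and the pitch search range (75-400 Hz) are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) of one frame via autocorrelation.
    The search range fmin-fmax is an assumed typical adult speech range."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag bounds for the pitch range
    lag = lo + np.argmax(corr[lo:hi])         # lag of strongest periodicity
    return sr / lag

def vocal_markers(signal, sr, frame_len=0.04):
    """Return (mean F0 in Hz, RMS intensity, relative jitter) for a waveform."""
    n = int(frame_len * sr)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    f0s = np.array([estimate_f0(f, sr) for f in frames])
    periods = 1.0 / f0s
    # Jitter: mean absolute period difference between consecutive frames,
    # normalized by the mean period (a frame-level proxy for cycle jitter).
    jitter = float(np.mean(np.abs(np.diff(periods))) / np.mean(periods))
    intensity = float(np.sqrt(np.mean(signal ** 2)))
    return float(f0s.mean()), intensity, jitter

# Demo on a synthetic 200 Hz tone standing in for a voiced speech sample.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 200 * t)
f0, intensity, jitter = vocal_markers(voice, sr)
```

For a pure 200 Hz tone the estimator recovers F0 near 200 Hz with near-zero jitter; real speech would show nonzero jitter, which the study links to negative symptom severity. Production tools typically use dedicated libraries (e.g. Praat-style analysis) rather than this simplified autocorrelation approach.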


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8817208
DOI: http://dx.doi.org/10.2196/26276


