Monitoring voice pitch is a finely tuned process in everyday conversation, as accurately conveying the linguistic and affective cues of an utterance depends on precise control of phonation and intonation. This monitoring is thought to depend on whether an error is treated as self-generated or externally generated, resulting in either correction or amplification of the error. The present study reports on two separate altered-auditory-feedback adaptation paradigms designed to explore whether participants would behave more cohesively once the error was of comparable perceptual size. The vocal behavior of normal-hearing, fluent speakers was recorded in response to a personalized pitch-shift magnitude versus a non-specific magnitude of one semitone. The personalized shift was set to the just-noticeable difference (JND) in fundamental frequency (F0) of each participant's own voice. Here we show that both tasks successfully elicited opposing responses to a constant, predictable F0 perturbation (present from production onset), but these effects barely carried over once feedback returned to normal, a pattern that bears some resemblance to compensatory responses. Experiencing an F0 shift that is perceived as self-generated (because it was precisely just noticeable) is not enough to make speakers behave more consistently and more homogeneously in an opposing manner. On the contrary, our results suggest that neither the type nor the magnitude of the response depends in any trivial way on participants' sensitivity to their own voice pitch. Based on this finding, we speculate that error correction could occur even with a bionic ear, even when F0 cues are too subtle for cochlear implant users to detect accurately.
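The relationship between the semitone shift and a personalized, JND-sized shift can be made concrete with a small numeric sketch: a pitch shift of s cents scales F0 by 2^(s/1200), a semitone being 100 cents. The function names and the 25-cent JND value below are illustrative assumptions, not values from the study.

```python
import math

def cents_to_ratio(cents: float) -> float:
    """Convert a pitch shift in cents to a multiplicative F0 ratio."""
    return 2.0 ** (cents / 1200.0)

def shifted_f0(f0_hz: float, shift_cents: float) -> float:
    """Apply a pitch shift (in cents) to a fundamental frequency in Hz."""
    return f0_hz * cents_to_ratio(shift_cents)

# A one-semitone (100-cent) upward shift on a 200 Hz voice:
print(round(shifted_f0(200.0, 100.0), 2))   # ~211.89 Hz

# A personalized shift equal to a hypothetical 25-cent JND:
print(round(shifted_f0(200.0, 25.0), 2))    # ~202.91 Hz
```

The contrast above shows why the two conditions differ perceptually: the semitone shift moves F0 by almost 12 Hz at 200 Hz, while a JND-sized shift may move it by only a few Hz.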
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7544828 | DOI: http://dx.doi.org/10.1038/s41598-020-73932-1
Mol Psychiatry
December 2024
Department of Psychiatry and Biobehavioral Sciences, Brain Research Institute, University of California Los Angeles, Los Angeles, CA, USA.
Major depressive disorder (MDD) often goes undiagnosed due to the absence of clear biomarkers. We sought to identify voice biomarkers for MDD and separate biomarkers indicative of MDD predisposition from biomarkers reflecting current depressive symptoms. Using a two-stage meta-analytic design to remove confounds, we tested the association between features representing vocal pitch and MDD in a multisite case-control cohort study of Chinese women with recurrent depression.
Lin Chuang Er Bi Yan Hou Tou Jing Wai Ke Za Zhi
January 2025
Jingjiang Medicine City Hospital (Shanghai Sixth People's Hospital Fujian).
Pitch abnormalities are a common manifestation of various voice disorders, with complex pathophysiological mechanisms involving changes in vocal fold tension, mass, and neuromuscular dysfunction of the larynx. This study aims to investigate the underlying physiological mechanisms of pitch-related disorders and explore diagnostic and therapeutic approaches, providing insights for clinical management.
Network
December 2024
Department of Electronics and Communication Engineering, Dronacharya Group of Institutions, Greater Noida, UP, India.
Speaker verification in text-dependent scenarios is critical for high-security applications but faces challenges such as voice quality variations, linguistic diversity, and gender-related pitch differences, which affect authentication accuracy. This paper introduces a Gender-Aware Siamese-Triplet Network-Deep Neural Network (ST-DNN) architecture to address these challenges. The Gender-Aware Network utilizes Convolutional 2D layers with ReLU activation for initial feature extraction, followed by multi-fusion dense skip connections and batch normalization to integrate features across different depths, enhancing discrimination between male and female speakers.
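The Siamese-Triplet component of the architecture described above rests on a margin-based triplet objective: embeddings of the same speaker are pulled together while embeddings of different speakers are pushed at least a margin apart. The sketch below is a generic plain-Python version of that loss on toy embedding vectors, not the paper's exact formulation; all names and the margin value are illustrative.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss: zero once the same-speaker pair is
    closer than the different-speaker pair by at least `margin`."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# Same-speaker embedding close to the anchor, impostor far away -> zero loss
anchor, positive, negative = [0.0, 0.0], [0.1, 0.0], [1.0, 1.0]
print(triplet_loss(anchor, positive, negative))  # 0.0
```

In training, such a loss would be minimized over many (anchor, positive, negative) triplets so that verification reduces to thresholding the distance between two utterance embeddings.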
J Speech Lang Hear Res
December 2024
University of California, San Francisco.
Purpose: We investigate the extent to which automated audiovisual metrics extracted during an affect production task show statistically significant differences between a cohort of children diagnosed with autism spectrum disorder (ASD) and typically developing controls.
Method: Forty children with ASD and 21 neurotypical controls interacted with a multimodal conversational platform with a virtual agent, Tina, who guided them through tasks prompting facial and vocal communication of four emotions (happy, angry, sad, and afraid) under conditions of high and low verbal and social cognitive task demands.
Results: Individuals with ASD exhibited greater standard deviation of the fundamental frequency of the voice with the minima and maxima of the pitch contour occurring at an earlier time point as compared to controls.
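The pitch-contour metrics reported in these results (F0 standard deviation and the timing of the contour's minimum and maximum) can be computed directly from a sampled contour. The sketch below assumes a simple list of (time, F0) samples and uses illustrative names; it is not the study's extraction pipeline.

```python
import statistics

def contour_metrics(contour):
    """Given (time_s, f0_hz) samples, return the F0 standard deviation
    and the times at which the contour minimum and maximum occur."""
    times = [t for t, _ in contour]
    f0s = [f for _, f in contour]
    t_min = times[f0s.index(min(f0s))]
    t_max = times[f0s.index(max(f0s))]
    return statistics.stdev(f0s), t_min, t_max

# Toy contour: F0 peaks early (0.1 s) and dips late (0.3 s)
contour = [(0.0, 210.0), (0.1, 250.0), (0.2, 230.0), (0.3, 190.0)]
sd, t_min, t_max = contour_metrics(contour)
print(t_min, t_max)  # 0.3 0.1
```

A group difference of the kind described would then appear as larger `sd` values, and earlier `t_min`/`t_max` values relative to utterance duration, in the ASD cohort.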
J Clin Med
November 2024
Department of Otorhinolaryngology and Head and Neck Surgery, Semmelweis University, Szigony u. 36, H-1083 Budapest, Hungary.