Listening to speech sounds activates motor and premotor areas in addition to temporal and parietal brain regions. These activations are somatotopically localized according to the effectors recruited in the production of particular phonemes. Previous work demonstrated that transcranial magnetic stimulation (TMS) of speech motor centers somatotopically altered speech perception, suggesting a role for the motor system. However, these effects seemed to occur only under adverse listening conditions, suggesting that degraded speech may prompt listeners to adopt unnatural neural strategies that rely on motor centers. Here, we investigated whether naturally occurring interspeaker variability, which did not affect task difficulty, made a speech discrimination task sensitive to TMS interference. In this paradigm, TMS over the tongue and lip motor representations somatotopically altered speech discrimination time. Furthermore, the TMS-induced effect correlated with listeners' similarity judgments between their own and the speakers' speech productions. Thus, the degree of motor recruitment depends on the perceived distance between listener and speaker. This result supports the claim that discriminating others' speech patterns requires the contribution of the listener's own motor repertoire. We conclude that motor recruitment in speech perception can be a natural product of discriminating speech in a normally variable and unpredictable environment, not merely a byproduct of task difficulty.
DOI: http://dx.doi.org/10.1093/cercor/bht257
J Speech Lang Hear Res
January 2025
School of Humanities, Shenzhen University, China.
Purpose: This study investigated the influence of vowel quality on loudness perception and stress judgment in Mongolian, an agglutinative language with free word stress. We aimed to explore the effects of intrinsic vowel features, presentation order, and intensity conditions on loudness perception and stress assignment.
Method: Eight Mongolian short vowel phonemes (/ɐ/, /ə/, /i/, /ɪ/, /ɔ/, /o/, /ʊ/, and /u/) were recorded by a native Mongolian speaker of the Urad subdialect (the Chahar dialect group) in Inner Mongolia.
J Speech Lang Hear Res
January 2025
Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China.
Purpose: Neurotypical individuals show a robust "global precedence effect" (GPE) when processing hierarchically structured visual information. However, the auditory domain remains understudied. The current research serves to fill the knowledge gap on auditory global-local processing across the broader autism phenotype in a tonal language context.
Codas
January 2025
Department of Health, Interdisciplinarity and Rehabilitation, School of Medical Sciences, Universidade Estadual de Campinas - UNICAMP - Campinas (SP), Brazil.
Purpose: To verify possible correlations between fundamental frequency (fo) and voice satisfaction among Brazilian transgender people.
Methods: An observational, cross-sectional, quantitative study was conducted using the Trans Woman Voice Questionnaire (TWVQ), voice recordings (sustained vowel and automatic speech), and the extraction of seven acoustic measures related to fo position and variability in transgender people. Participants were divided into two groups according to gender.
JASA Express Lett
January 2025
Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington 98103, USA.
Pitch perception affects children's ability to perceive speech, appreciate music, and learn in noisy environments, such as their classrooms. Here, we investigated pitch perception for pure tones as well as resolved and unresolved complex tones with a fundamental frequency of 400 Hz in 8- to 11-year-old children and adults. Pitch perception in children was better for resolved relative to unresolved complex tones, consistent with adults.
Sci Rep
January 2025
RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.
Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that may be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not, however, display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.