Existing literature has demonstrated that individuals with autism spectrum disorder (ASD) exhibit atypical use of contextual information in their surroundings. However, little is known about how they integrate contextual cues in speech processing. This study aimed to explore how Mandarin-speaking children with and without ASD identify lexical tones in speech and nonspeech contexts, and to determine whether the size of the context effect is modulated by children's cognitive abilities. Twenty-five children with ASD and 25 typically developing (TD) children were asked to identify Mandarin lexical tones preceded by three types of contexts (speech, nonspeech, and nonspeech-flattened contexts). We also tested child participants' verbal intelligence, nonverbal intelligence, and working memory capacity. Results revealed that the context effect was observed only in the speech contexts, where Mandarin-speaking children with ASD exhibited a reduced context effect compared to TD children. Moreover, TD children with higher verbal intelligence demonstrated a diminished context effect. However, nonverbal intelligence and working memory capacity were not significantly associated with the size of the context effect in either group. These findings reveal a subtle yet important difference between ASD and TD children's use of speech contexts in lexical tone identification and validate a speech-specific mechanism underpinning children's lexical tone normalization.
DOI: http://dx.doi.org/10.1007/s10803-025-06775-2
J Autism Dev Disord
March 2025
School of Foreign Languages, Hunan University, Lushannan Road No. 2, Yuelu District, Changsha City, Hunan Province, China.
PLoS One
February 2025
Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama, United States of America.
Social Feedback Speech Technologies (SFST) are programs and devices, often "AI"-powered, that claim to provide users with feedback about how their speech sounds to other humans. To date, academic research has not focused on how such systems perform for a variety of speakers. In 2020, Amazon released a wearable called Halo, touting its fitness and sleep tracking, as well as its ability to evaluate the wearer's voice to help them "understand how they sound to others".
J Speech Lang Hear Res
March 2025
Institute of Linguistics, Academia Sinica, Taipei, Taiwan.
Purpose: Objective measures of auditory capacity in the hearing loss population are crucial for cross-checking behavioral measures. Mismatch negativity (MMN) is an auditory event-related potential component indexing automatic change detection and reflecting speech discrimination performance. MMN can potentially serve as an objective measure of speech discrimination.
Front Psychol
January 2025
School of English and International Studies, Beijing Foreign Studies University, Beijing, China.
Introduction: This study investigates whether unfamiliar tone sandhi patterns in Tianjin Mandarin can be implicitly learned through an artificial language learning experiment, and whether the acquired knowledge is rule-based and generalizable.
Methods: Participants were trained to learn monosyllabic words and disyllabic phrases with their attention focused on a word-order rule, while unknowingly being exposed to unfamiliar tone sandhi patterns. A judgement test with trial-to-trial confidence ratings was conducted to assess the learning outcomes and participants' awareness.
The ability to perceive pitch allows human listeners to experience music, recognize the identity and emotion conveyed by conversational partners, and make sense of their auditory environment. A pitch percept is formed by weighting different acoustic cues (e.g., …).