Many studies have revealed a link between temporal information processing (TIP) in the millisecond range and speech perception. Previous studies indicated a TIP dysfunction accompanied by deficient phonemic hearing in children with specific language impairment (SLI). In this study we concentrated on phonetic identification in SLI, using the voice-onset-time (VOT) phenomenon, in which TIP is built in. VOT is crucial for speech perception, as stop consonants (like /t/ vs. /d/) may be distinguished by an acoustic difference in time between the onset of the consonant (stop release burst) and the following vibration of the vocal folds (voicing). In healthy subjects, two categories (voiced and unvoiced) are determined using the VOT task. The present study aimed to verify whether children with SLI show a pattern of phonetic identification similar to that of their healthy peers, and whether an intervention based on TIP improves performance on the VOT task. Children aged 5 to 8 years (n = 47) were assigned to two groups: normal children without any language disability (NC, n = 20) and children with SLI (n = 27). In the latter group, participants were randomly assigned to two treatment subgroups: experimental temporal training (EG, n = 14) and control non-temporal training (CG, n = 13). The analyzed indicators of phonetic identification were: (1) the boundary location (α), determined as the VOT value corresponding to 50% voiced/unvoiced distinctions; (2) the ranges of the voiced/unvoiced categories; (3) the slope of the identification curve (β), reflecting identification correctness; (4) the percentage of voiced distinctions within the applied VOT spectrum. The results indicated similar α values and similar ranges of the voiced/unvoiced categories in SLI and NC. However, β in SLI was significantly higher than in NC. After the intervention, a significant improvement in β was observed only in EG, which achieved a level of performance comparable to that observed in NC.
The training-related improvement in CG was non-significant. Furthermore, only in EG did the post-test β values correlate with measures of TIP as well as with the phonemic hearing measures obtained in our previous studies. These findings provide further evidence that TIP is omnipresent in language communication and is reflected not only in phonemic hearing but also in phonetic identification.
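The two key indicators above, the boundary location (α) and the slope (β), are standard parameters of a psychometric identification curve. As an illustration only (not the authors' analysis code, and with synthetic response data), they can be estimated by fitting a logistic function to the proportion of "voiced" responses along a VOT continuum:

```python
# Illustrative sketch: estimating the phonetic boundary (alpha) and the
# slope (beta) of a VOT identification curve with a logistic fit.
# The response proportions below are synthetic, for demonstration only.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(vot, alpha, beta):
    # P("voiced") falls from ~1 to ~0 as VOT grows:
    # alpha = VOT value at 50% "voiced" responses (the category boundary),
    # beta  = steepness of the identification curve.
    return 1.0 / (1.0 + np.exp(beta * (vot - alpha)))

# Synthetic VOT continuum (ms) and proportion of "voiced" responses
vot_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_voiced = np.array([0.98, 0.95, 0.80, 0.45, 0.15, 0.05, 0.02])

(alpha_hat, beta_hat), _ = curve_fit(psychometric, vot_ms, p_voiced,
                                     p0=[30.0, 0.2])
print(f"boundary alpha = {alpha_hat:.1f} ms, slope beta = {beta_hat:.2f}")
```

A steeper fitted curve indicates sharper, more categorical voiced/unvoiced identification around the boundary; a flatter curve indicates more ambiguous responses across the continuum.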
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5998645
DOI: http://dx.doi.org/10.3389/fnhum.2018.00213
Dyslexia
February 2025
Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, Department of Learning Disabilities, University of Haifa, Haifa, Israel.
While the multiple cognitive deficits model of reading difficulties (RD) is widely supported, different cognitive-linguistic deficits may manifest differently depending on language and writing-system characteristics. This study examined the cognitive-linguistic profiles underlying RD in Hebrew, which is characterised by rich Semitic morphology and two writing versions differing in orthographic consistency: a transparent (pointed) version and a deep (unpointed) version. A two-step cluster analysis grouped 96 second graders and 81 fourth graders based on their phonological awareness (PA), rapid naming (RAN), orthographic knowledge (OK) and morphological-pattern identification (MPI) abilities.
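The grouping step described above can be sketched in outline. This is not the study's actual pipeline (the authors report an SPSS-style two-step cluster analysis, approximated here with k-means on standardized scores) and all data below are synthetic:

```python
# Hypothetical sketch: clustering children into cognitive-linguistic
# profiles from four standardized measures (PA, RAN, OK, MPI).
# K-means is used as a simple stand-in for two-step cluster analysis;
# all scores are randomly generated for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 96 children x 4 measures (arbitrary score scales)
scores = np.column_stack([
    rng.normal(100, 15, 96),  # PA  (phonological awareness)
    rng.normal(60, 10, 96),   # RAN (rapid naming speed)
    rng.normal(50, 8, 96),    # OK  (orthographic knowledge)
    rng.normal(30, 5, 96),    # MPI (morphological-pattern identification)
])

z = StandardScaler().fit_transform(scores)  # put measures on one scale
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
print(np.bincount(profiles))  # number of children per profile
```

Standardizing before clustering matters here, since the four measures are on different scales and would otherwise contribute unequally to the distance computation.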
J Acoust Soc Am
December 2024
Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusetts 02114, USA.
Identification and quantification of speech variations in velar production across phonological environments have long been of interest in speech motor control studies. Dynamic magnetic resonance imaging has become a favorable tool for visualizing articulatory deformations and providing quantitative insights into speech activity over time. Based on this modality, a workflow of image analysis techniques is proposed to uncover potential deformation variations in the human tongue caused by changes in phonological environment, obtained by altering the placement of velar consonants in utterances.
J Acoust Soc Am
December 2024
School of Chinese Language and Literature, Beijing Normal University, Beijing 100875, China.
This study examines whether cue integration in tone perception changes with disparities in language experience across two groups of multidialectal speakers from Changsha: participants in the dialect-preserving group speak Changsha dialect (CD), Changsha Plastic Mandarin (CPM), and Standard Mandarin (SM), whereas participants in the dialect-lost group speak CPM and SM but not CD. An identification test was conducted on T1 and T4, both of which are present in CD and CPM. T1 and T4 are both associated with a high pitch, but they differ in pitch height, pitch contour, and voice quality.
J Speech Lang Hear Res
December 2024
Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC.
Purpose: This study examined the race identification of Southern American English speakers from two geographically distant regions in North Carolina. The purpose of this work is to explore how talkers' self-identified race, talker dialect region, and acoustic speech variables contribute to listener categorization of talker races.
Method: Two groups of listeners heard a series of /h/-vowel-/d/ (/hVd/) words produced by Black and White talkers from East and West North Carolina, respectively.
J Acoust Soc Am
November 2024
Aix Marseille Université, CNRS, LPL, 13100 Aix-en-Provence, France.
Accentuation is encoded by both durational and pitch cues in French. While previous research agrees that the sole presence of pitch cues is sufficient to encode accentuation in French, the role of durational cues is less clear. In four cue-weighting accent perception experiments, we examined the role of pitch and durational cues in French listeners' perception of accentuation.