It is known that deaf individuals usually outperform normal-hearing individuals in speechreading; however, the underlying reasons remain unclear. In the present study, speechreading performance was assessed in normal-hearing participants (NH), deaf participants who had been exposed to the Cued Speech (CS) system early and intensively, and deaf participants exposed to oral language without Cued Speech (NCS). Results show a graded pattern of performance: highest in the CS group, then the NCS group, and lowest in the NH participants. Moreover, error analysis suggests that speechreading processing is more accurate in the CS group than in the other groups. Given that early and intensive CS exposure has been shown to promote the development of accurate phonological processing, we propose that the superior speechreading performance of Cued Speech users is linked to a greater capacity for phonological decoding of the visible articulators.
DOI: http://dx.doi.org/10.1111/j.1467-9450.2011.00919.x
Can one shift attention among voices at a cocktail party during a silent pause? Researchers have required participants to attend to one of two simultaneous voices, cued by its gender or location. Switching the target gender or location incurs a performance 'switch cost', which was recently shown to diminish with preparation when a gender cue was presented in advance. The current study asks whether preparation for a switch is also effective when a voice is selected by location.
J Psycholinguist Res
January 2025
Department of Linguistics, University of Potsdam, Potsdam, Germany.
Rhythm perception in speech and non-speech acoustic stimuli has been shown to be affected by general acoustic biases as well as by phonological properties of the native language of the listener. The present paper extends the cross-linguistic approach in this field by testing the application of the iambic-trochaic law as an assumed general acoustic bias on rhythmic grouping of non-speech stimuli by speakers of three languages: Arabic, Hebrew and German. These languages were chosen due to relevant differences in their phonological properties on the lexical level alongside similarities on the phrasal level.
PLoS One
December 2024
Rotman Research Institute, Baycrest, Toronto, Ontario, Canada.
Cochlear implantation is a well-established method for restoring hearing sensation in individuals with severe to profound hearing loss. It significantly improves verbal communication for many users, despite substantial variability in patients' reports and performance on speech perception tests and quality-of-life outcome measures. Such variability in outcome measures remains several years after implantation and could reflect difficulties in attentional regulation.
J Speech Lang Hear Res
January 2025
Université Libre de Bruxelles, Brussels, Belgium.
Purpose: The objective of the present study is to investigate nasal and oral vowel production in French-speaking children with cochlear implants (CIs) and children with typical hearing (TH). Vowel nasality relies primarily on acoustic cues that may be less effectively transmitted by the implant. The study investigates how children with CIs manage to produce these segments in French, a language with contrastive vowel nasalization.
Am J Speech Lang Pathol
January 2025
Department of Speech and Hearing Science, College of Arts and Sciences, The Ohio State University, Columbus.
Purpose: In light of COVID-19, telepractice for speech therapy has been increasingly adopted. Telepractice promotes accessibility to therapy services for those in rural environments, lowers the frequency of missed appointments, and reduces the costs of rehabilitation. However, the efficacy of telepractice has scarcely been explored in the aphasia literature.