Audio-visual speech perception in noise: Implanted children and young adults versus normal hearing peers.

Int J Pediatr Otorhinolaryngol

Ariel University, Department of Communication Disorders, Israel.

Published: January 2017

Objective: The purpose of the current study was to evaluate auditory, visual and audiovisual speech perception abilities among two groups of cochlear implant (CI) users: prelingual children and long-term young adults, as compared to their normal hearing (NH) peers.

Methods: A prospective cohort study including 50 participants: two groups of CI users (10 children and 10 adults) and two groups of normal hearing peers (15 participants each). Speech stimuli comprised meaningful and nonsense monosyllabic words presented at a signal-to-noise ratio of 0 dB, introduced via the auditory, visual, and audiovisual modalities.

Results: (1) Both CI children and CI adults showed lower speech perception accuracy in background noise in the auditory and audiovisual modalities compared with their NH peers, but significantly higher visual speech perception scores. (2) CI children outperformed CI adults in speech perception in noise in the auditory modality, but were inferior in the visual modality. Both CI children and CI adults showed similar audiovisual integration.

Conclusions: The findings of the current study show that although the CI children were implanted bilaterally, at a very young age, and with advanced technology, they still had difficulty perceiving speech in adverse listening conditions, even with the addition of the visual modality. This suggests that audiovisual training might benefit this group by improving audiovisual integration in difficult listening situations.


Source
http://dx.doi.org/10.1016/j.ijporl.2016.11.022


Similar Publications

Newborns are able to neurally discriminate between speech and nonspeech right after birth. To date, it remains unknown whether this early speech discrimination and the underlying neural language network are associated with later language development. Preterm-born children are an interesting cohort in which to investigate this relationship, as previous studies have shown that preterm-born neonates exhibit alterations of speech processing and have a greater risk of later language deficits.


Intonation adaptation to multiple talkers.

J Exp Psychol Learn Mem Cogn

December 2024

University at Buffalo, The State University of New York, Department of Psychology.

Speech intonation conveys a wealth of linguistic and social information, such as the intention to ask a question versus make a statement. However, due to the considerable variability in our speaking voices, the mapping from meaning to intonation can be many-to-many and often ambiguous. Previous studies suggest that the comprehension system resolves this ambiguity, at least in part, by adapting to recent exposure.


Listeners can use both lexical context (i.e., lexical knowledge activated by the word itself) and lexical predictions based on the content of a preceding sentence to adjust their phonetic categories to speaker idiosyncrasies.


The goal of the present investigation was to perform a registered replication of Jones and Macken's (1995b) study, which showed that segregating a sequence of sounds to distinct locations reduced its disruptive effect on serial recall, thereby postulating an intriguing connection between auditory stream segregation and the cognitive mechanisms underlying the irrelevant speech effect. Specifically, a sequence of changing utterances was found to be less disruptive in stereophonic presentation, which allowed each auditory object (letters) to be allocated to a unique location (right ear, left ear, center), than when the same sounds were played monophonically.


Can one shift attention among voices at a cocktail party during a silent pause? Researchers have required participants to attend to one of two simultaneous voices, cued by its gender or location. Switching the target gender or location results in a performance 'switch cost', which was recently shown to be reduced by preparation when a gender cue was presented in advance. The current study asks whether preparation for a switch is also effective when a voice is selected by location.

