This study investigated whether a short intensive psychophysical auditory training program is associated with speech perception benefits and changes in cortical auditory evoked potentials (CAEPs) in adult cochlear implant (CI) users. Ten adult implant recipients trained for approximately 7 hours on psychophysical tasks (Gap-in-Noise Detection, Frequency Discrimination, Spectral Rippled Noise [SRN], Iterated Rippled Noise, Temporal Modulation). Speech performance was assessed before and after training using Lexical Neighborhood Test (LNT) words in quiet and in eight-speaker babble. CAEPs evoked by a natural speech stimulus /baba/ with varying syllable stress were assessed pre- and post-training, in quiet and in noise. SRN psychophysical thresholds showed a significant improvement (78% on average) over the training period, but performance on the other psychophysical tasks did not change. LNT scores in noise improved significantly post-training by 11% on average compared with three pretraining baseline measures. N1P2 amplitude changed post-training for /baba/ in quiet (p = 0.005, visit 3 pretraining versus visit 4 post-training). CAEP changes did not correlate with behavioral measures. CI recipients' clinical records indicated a plateau in speech perception performance prior to participation in the study. A short period of intensive psychophysical training produced small but significant gains in speech perception in noise and spectral discrimination ability. There remain questions about the most appropriate type of training and the duration or dosage of training that provides the most robust outcomes for adults with CIs.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4910571 | PMC |
| http://dx.doi.org/10.1055/s-0035-1570335 | DOI Listing |
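The SRN task in the abstract above is a discrimination task on noise carriers whose spectral envelope is sinusoidally modulated along a log-frequency axis. As a hedged illustration only, the sketch below shows one simple way to synthesize such a stimulus in Python; the passband, ripple density, and ripple depth values are assumptions for demonstration and are not the study's stimulus parameters or generation code.

```python
# Minimal sketch of a spectral rippled noise (SRN) stimulus: broadband noise
# whose spectrum is sinusoidally rippled on a log-frequency axis.
# All parameter values here are illustrative assumptions.
import numpy as np

def spectral_ripple_noise(dur=0.5, fs=44100, f_lo=100.0, f_hi=8000.0,
                          ripples_per_octave=1.0, depth_db=20.0, phase=0.0):
    n = int(dur * fs)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # Flat-magnitude noise with random phase, restricted to the passband.
    spec = np.exp(1j * 2 * np.pi * np.random.rand(freqs.size))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spec[~band] = 0.0
    # Sinusoidal ripple in dB across octaves above the low edge of the band.
    octaves = np.zeros_like(freqs)
    octaves[band] = np.log2(freqs[band] / f_lo)
    ripple_db = (depth_db / 2.0) * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
    spec *= 10.0 ** (ripple_db / 20.0)
    # Back to the time domain, normalized to unit peak amplitude.
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))

stimulus = spectral_ripple_noise()
```

In typical spectral ripple discrimination paradigms, listeners distinguish a standard ripple from a ripple-phase-inverted version, and the threshold is the highest ripple density that still supports discrimination.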
J Speech Lang Hear Res
January 2025
Department of Special Education, Central China Normal University, Wuhan.
Purpose: This cross-sectional study explored how the speechreading ability of adults with hearing impairment (HI) in China affects their perception of the four Mandarin Chinese lexical tones: high (Tone 1), rising (Tone 2), falling-rising (Tone 3), and falling (Tone 4). We predicted that higher speechreading ability would result in better tone performance and that accuracy would vary among individual tones.
Method: A total of 136 young adults with HI (ages 18-25 years) in China participated in the study and completed Chinese speechreading and tone awareness tests.
J Speech Lang Hear Res
January 2025
Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, South Korea.
Purpose: Tools that can reliably measure changes in the perception of tinnitus following interventions are lacking. The minimum masking level, defined as the lowest level at which tinnitus is completely masked, is a candidate for quantifying changes in tinnitus perception. In this study, we aimed to determine minimal clinically important differences for minimum masking level.
eLife
January 2025
State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, China.
Speech comprehension involves the dynamic interplay of multiple cognitive processes, from basic sound perception to linguistic encoding and, finally, complex semantic-conceptual interpretation. How the brain handles these diverse streams of information processing remains poorly understood. Applying Hidden Markov Modeling to fMRI data obtained during spoken narrative comprehension, we reveal that whole-brain networks predominantly oscillate within a tripartite latent state space.
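As a hedged sketch of the general technique named above (not the authors' pipeline, atlas, or choice of three states), the following shows how a Gaussian Hidden Markov Model can be fit to multivariate time series such as parcellated fMRI signals and used to decode a latent state sequence; the data here are random placeholders.

```python
# Minimal sketch: fit a Gaussian HMM to multivariate time series and decode
# latent states with hmmlearn. Placeholder data stand in for fMRI signals.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))   # 1000 time points x 10 regions (placeholder)

model = hmm.GaussianHMM(n_components=3, covariance_type="full",
                        n_iter=100, random_state=0)
model.fit(X)                          # EM estimation of means, covariances, transitions
states = model.predict(X)             # Viterbi decoding of the latent state sequence
occupancy = np.bincount(states, minlength=3) / states.size  # time spent in each state
```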
J Acoust Soc Am
January 2025
Department of Electronics Engineering, Pusan National University, Busan, South Korea.
The amount of information contained in speech signals is a fundamental concern of speech-based technologies and is particularly relevant in speech perception. Measuring the mutual information of actual speech signals is non-trivial, and quantitative measurements have not been extensively conducted to date. Recent advances in machine learning have made it possible to estimate mutual information directly from data.
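For illustration of data-driven mutual information estimation in general (the paper's own estimator is not specified here and may well differ, e.g., a neural estimator), a minimal sketch using scikit-learn's k-nearest-neighbor-based estimator on synthetic samples:

```python
# Minimal sketch: estimate mutual information directly from paired samples
# using a k-NN estimator (sklearn.feature_selection.mutual_info_regression).
# Synthetic data stand in for speech-derived variables.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                      # stand-in source variable
y = 0.8 * x + 0.2 * rng.standard_normal(5000)      # noisy transformation of x

mi_nats = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]
print(f"estimated MI: {mi_nats:.3f} nats ({mi_nats / np.log(2):.3f} bits)")
```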
J Acoust Soc Am
January 2025
Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands.
Previous studies suggested that pitch characteristics of lexical tones in Standard Chinese influence various sensory perceptions, but whether they iconically bias emotional experience remained unclear. We analyzed the arousal and valence ratings of bi-syllabic words in two corpora (Study 1) and conducted an affect rating experiment using a carefully designed corpus of bi-syllabic words (Study 2). Two-alternative forced-choice tasks further tested the robustness of lexical tones' affective iconicity in an auditory nonce word context (Study 3).