Temporal-envelope cues are essential for successful speech perception. We asked whether training on non-speech stimuli containing temporal-envelope cues can improve the perception of spectrally degraded (vocoded) speech, in which the temporal envelope (but not the temporal fine structure) is largely preserved. Two groups of listeners were trained on different amplitude-modulation (AM) tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials in total; AM rates: 4, 8, and 16 Hz), while an additional control group received no training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or after an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but this improvement did not differ significantly from that observed for the controls. Thus, we find no convincing evidence that this amount of training with non-speech temporal-envelope cues provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6934405 | PMC |
| http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0226288 | PLOS |
Sci Rep
December 2024
Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK.
Previous research suggests that emotional prosody perception is impaired in neurodegenerative diseases such as Alzheimer's disease (AD) and primary progressive aphasia (PPA). However, no previous research has investigated emotional prosody perception in these diseases under non-ideal listening conditions. We recruited 18 patients with AD and 31 with PPA (nine logopenic (lvPPA), 11 nonfluent/agrammatic (nfvPPA), and 11 semantic (svPPA)), together with 24 healthy age-matched individuals.
J Acoust Soc Am
November 2024
Department of Electrical Engineering, The University of Texas at Dallas, Richardson, Texas 75080, USA.
Because a reference signal is often unavailable in real-world scenarios, reference-free speech quality and intelligibility assessment models are important for many speech processing applications. Although many deep-learning models have been applied to build non-intrusive speech assessment approaches and have achieved promising performance, studies focusing on hearing-impaired (HI) subjects are limited. This paper presents HASA-Net+, a multi-objective non-intrusive hearing-aid speech assessment model building upon our previous work, HASA-Net.
Eur J Neurosci
December 2024
Department of Psychology, University of Helsinki, Helsinki, Finland.
When performing cognitive tasks in noisy conditions, the brain needs to maintain task performance while additionally controlling the processing of task-irrelevant and potentially distracting auditory stimuli. Previous research indicates that a fundamental mechanism by which this control is achieved is the attenuation of task-irrelevant processing, especially in conditions with high task demands. However, it remains unclear whether the processing of complex naturalistic sounds can be modulated as easily as that of simpler ones.
Sci Rep
November 2024
Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, 37007, Salamanca, Spain.
Understanding speech in noisy settings is harder for hearing-impaired (HI) people than for normal-hearing (NH) people, even when speech is audible. This is often attributed to hearing loss altering the neural encoding of temporal and/or spectral speech cues. Here, we investigated whether this difference may also be due to an impaired ability to adapt to background noise.
Brain Behav
November 2024
Department of Audiology, Faculty of Health Science, Hacettepe University, Ankara, Turkey.
Background: It is still not fully understood which cognitive resources the methods used to assess listening effort are most sensitive to, or how the results of these measurements relate to each other. The aim of this study is to ascertain which neural resources crucial for listening effort are most sensitively captured by objective measurement methods using differently degraded speech stimuli.
Methods: A total of 49 individuals between the ages of 19 and 34 with normal hearing participated in the study.