Noise reduction (NR) systems are commonplace in modern digital hearing aids. Although it does not improve speech intelligibility, NR helps the hearing-aid user by lowering noise annoyance, reducing cognitive load, and improving ease of listening. Previous psychophysical work has shown that NR does in fact improve the ability of normal-hearing (NH) listeners to discriminate the slow amplitude-modulation (AM) cues representative of those found in speech. The goal of this study was to assess whether this improvement of AM discrimination with NR can also be observed for hearing-impaired (HI) listeners. AM discrimination was measured at two audio frequencies, 500 Hz and 2 kHz, in a background noise with a signal-to-noise ratio of 12 dB. Discrimination was measured for ten HI and ten NH listeners with and without NR processing. The HI listeners had a moderate sensorineural hearing loss of about 50 dB HL at 2 kHz and normal hearing (≤ 20 dB HL) at 500 Hz. The results showed that most of the HI listeners tended to benefit from NR at 500 Hz but not at 2 kHz. However, statistical analyses showed that the HI listeners did not benefit significantly from NR in either frequency region. In comparison, the NH listeners showed a significant benefit from NR at both frequencies. For each condition, the fidelity of AM transmission was quantified by a computational model of early auditory processing. The parameters of the model were adjusted separately for the two groups (NH and HI) of listeners. The AM discrimination performance of the HI group (with and without NR) was best captured by a model simulating the loss of the fast-acting amplitude compression applied by the normal cochlea. This suggests that the lack of benefit from NR for HI listeners results from loudness recruitment.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4164688
DOI: http://dx.doi.org/10.1007/s10162-014-0466-8
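The stimulus configuration described in the abstract (an amplitude-modulated carrier at 500 Hz or 2 kHz embedded in background noise at a 12 dB signal-to-noise ratio) can be sketched as follows. This is an illustrative sketch only: the modulation rate, modulation depth, duration, and use of white noise are hypothetical choices, not parameters reported by the study.

```python
import numpy as np

fs = 44100          # sample rate (Hz)
dur = 1.0           # stimulus duration (s); hypothetical
fc = 500.0          # carrier frequency (Hz); the study also tested 2 kHz
fm = 4.0            # slow AM rate (Hz), representative of speech envelopes; hypothetical
m = 0.5             # modulation depth; hypothetical
snr_db = 12.0       # signal-to-noise ratio stated in the abstract

t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * fc * t)
am_tone = (1 + m * np.sin(2 * np.pi * fm * t)) * carrier

# Scale white noise so the tone-to-noise power ratio equals snr_db.
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)
sig_pow = np.mean(am_tone ** 2)
noise_pow = np.mean(noise ** 2)
noise *= np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))

stimulus = am_tone + noise
```

Because the noise is rescaled analytically rather than iteratively, the realized SNR matches the target exactly regardless of the noise realization.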
Dig Dis Sci
January 2025
Division of Gastroenterology and Hepatology, Department of Medicine, Center for Esophageal Diseases and Swallowing, and Center for Gastrointestinal Biology and Disease, University of North Carolina School of Medicine, 130 Mason Farm Rd, Chapel Hill, North Carolina, USA.
Eur Arch Otorhinolaryngol
January 2025
Audio-Vestibular Medicine Unit, Department of Ear, Nose and Throat, Faculty of Medicine, Assiut University, Assiut, Egypt.
Background: Subjective tinnitus is characterized by the perception of sound in the absence of any external or internal acoustic stimulus. Many approaches, medical and nonmedical, have been developed over the years to treat tinnitus. However, no consensus has been reached on the optimal therapeutic approach.
Sci Rep
January 2025
Department of Psychology, New York University, New York, NY, USA.
Music can evoke powerful emotions in listeners. However, the role that instrumental music (music without any vocal part) plays in conveying extra-musical meaning, above and beyond emotions, is still a debated question. We conducted a study wherein participants (N = 121) listened to twenty 15-second-long excerpts of polyphonic instrumental soundtrack music and reported (i) perceived emotions (e.
Nat Commun
January 2025
The Faculty of Data and Decisions Sciences, Technion - Israel Institute of Technology, Haifa, Israel.
Large Language Models (LLMs) have shown success in predicting neural signals associated with narrative processing, but their approach to integrating context over large timescales differs fundamentally from that of the human brain. In this study, we show how the brain, unlike LLMs that process large text windows in parallel, integrates short-term and long-term contextual information through an incremental mechanism. Using fMRI data from 219 participants listening to spoken narratives, we first demonstrate that LLMs predict brain activity effectively only when using short contextual windows of up to a few dozen words.
This paper explores the perception of two diachronically related and mutually intelligible phonological oppositions, the onset voicing contrast of Northern Raglai and the register contrast of Southern Raglai. It is the continuation of a previous acoustic study that revealed that Northern Raglai onset stops maintain a voicing distinction accompanied by weak formant and voice quality modulations on following vowels, while Southern Raglai has transphonologized this voicing contrast into a register contrast marked by vowel and voice quality distinctions. Our findings indicate that the two dialects partially differ in their use of identification cues, Northern Raglai listeners using both voicing and F1 as major cues while Southern Raglai listeners largely focus on F1.