JASA Express Lett
December 2024
Nonlinguistic auditory abilities (e.g., stream segregation, musical perceptual abilities) are thought to contribute to speech perception in noise.
Most auditory environments contain multiple sound waves that are mixed before reaching the ears. In such situations, listeners must disentangle individual sounds from the mixture, a process known as auditory scene analysis. Analyzing complex auditory scenes relies on listeners' ability to segregate acoustic events into different streams and to selectively attend to the stream of interest.
The amplitude modulation following response (AMFR) is a steady-state auditory response reflecting phase-locking to slow variations in the amplitude (amplitude modulation, AM) of auditory stimuli, which carry fundamental acoustic information. From a developmental perspective, the AMFR has been recorded in sleeping infants and compared to that of sleeping or awake adults. The lack of AMFR recordings in awake infants limits conclusions about the development of phase-locking to AM.
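To make the stimulus concrete, a sinusoidally amplitude-modulated tone of the kind used to evoke an AMFR can be sketched as follows. This is a minimal illustration only; the carrier frequency, modulation rate, and modulation depth below are arbitrary choices for the example, not the parameters of the study.

```python
import numpy as np

fs = 44100        # sampling rate in Hz
duration = 1.0    # stimulus duration in seconds
fc = 1000.0       # carrier frequency in Hz (arbitrary for this sketch)
fm = 40.0         # modulation rate in Hz (an assumed, illustrative value)
depth = 1.0       # modulation depth, between 0 and 1

t = np.arange(int(fs * duration)) / fs
# The slow envelope (the AM) multiplies the fast carrier:
envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
am_tone = envelope * np.sin(2 * np.pi * fc * t)
# Normalize to unit peak amplitude to avoid clipping on playback
am_tone /= np.abs(am_tone).max()
```

The steady-state AMFR reflects neural phase-locking to the `envelope` component of such a stimulus rather than to the carrier itself.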
Cued Speech (CS) is a communication system that uses manual gestures to facilitate lipreading. In this study, we investigated how CS information interacts with natural speech using Event-Related Potential (ERP) analyses in French-speaking, typically hearing adults (TH) who were either naïve or experienced CS producers. The audiovisual (AV) presentation of lipreading information elicited an amplitude attenuation of the entire N1 and P2 complex in both groups, accompanied by N1 latency facilitation in the group of CS producers.
Objective assessment of auditory discrimination has often been measured using the Auditory Change Complex (ACC), which is a cortically generated potential elicited by a change occurring within an ongoing, long-duration auditory stimulus. In cochlear implant users, the electrically-evoked ACC has been used to measure electrode discrimination by changing the stimulating electrode during stimulus presentation. In addition to this cortical component, subcortical measures provide further information about early auditory processing in both normal hearing listeners and cochlear implant users.
Psychophysical thresholds were measured for 8- to 16-year-old children with mild-to-moderate sensorineural hearing loss (MMHL; N = 46) on a battery of auditory processing tasks that included measures designed to depend on frequency selectivity and on sensitivity to temporal fine structure (TFS) or envelope cues. Children with MMHL who wore hearing aids were tested in both unaided and aided conditions, and all were compared to a group of normally hearing (NH) age-matched controls. Children with MMHL performed more poorly than NH controls on tasks considered to depend on frequency selectivity, sensitivity to TFS, and speech discrimination (/bɑ/-/dɑ/), but not on tasks measuring sensitivity to envelope cues.
Auditory deprivation in the form of deafness during development leads to lasting changes in central auditory system function. However, less is known about the effects of mild-to-moderate sensorineural hearing loss (MMHL) during development. Here, we used a longitudinal design to examine late auditory evoked responses and mismatch responses to nonspeech and speech sounds for children with MMHL.
Objectives: This study aimed to evaluate the informational component of speech-on-speech masking. Speech perception in the presence of a competing talker involves not only informational masking (IM) but also a number of masking processes involving interaction of masker and target energy in the auditory periphery. Such peripherally generated masking can be eliminated by presenting the target and masker in opposite ears (dichotically).
Noise typically induces both peripheral and central masking of an auditory target. While the idea that a speech-in-noise perception deficit is inherent to dyslexia is still debated, most studies have focused on the peripheral contribution to the difficulty dyslexic listeners have perceiving speech in noise. Here, we investigated the respective contributions of peripheral and central noise in three groups of children: dyslexic children, chronological-age-matched controls (CA), and reading-level-matched controls (RL).
Purpose: Children with dyslexia have been suggested to experience deficits in both categorical perception (CP) and speech identification in noise (SIN) perception. However, results regarding both abilities are inconsistent, and the relationship between them is still unclear. Therefore, this study aimed to investigate the relationship between CP and the psychometric function of SIN perception.
Studies evaluating speech perception in noise have reported inconsistent results regarding a potential deficit in dyslexic children. So far, most of them investigated energetic masking. The present study evaluated situations inducing mostly informational masking, which reflects cognitive interference induced by the masker.
In complex auditory scenes, perceiving a given target signal is often complicated by the presence of competing maskers. In addition to energetic masking (EM), which arises from peripheral interference between target and maskers at the cochlear level, informational masking (IM), which takes place at a more central level, is also responsible for the difficulties encountered in typical ecological auditory environments. While recent research has led to mixed results regarding a potential speech-perception-in-noise deficit in dyslexic children, most studies have actually investigated EM situations.
Objectives: Interference between a target and simultaneous maskers occurs both at the cochlear level, through energetic masking, and more centrally, through informational masking (IM). Hence, quantifying the amount of IM requires strict control of the energetic component. Presenting target and maskers on different sides (i.e., dichotically) …
The goal of noise reduction (NR) algorithms in digital hearing aid devices is to reduce background noise whilst preserving as much of the original signal as possible. These algorithms may increase the signal-to-noise ratio (SNR) in an ideal case, but they generally fail to improve speech intelligibility. However, due to the complex nature of speech, it is difficult to disentangle the numerous low- and high-level effects of NR that may underlie the lack of speech perception benefits.
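To make the SNR notion concrete, the ratio is conventionally computed in decibels from the mean power of the signal and noise waveforms. The sketch below uses hypothetical toy signals (a pure tone standing in for speech, white noise as the background), not the materials of the study:

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in dB, from the mean power of each waveform."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    return 10.0 * np.log10(p_signal / p_noise)

# Toy example with one second of audio at 16 kHz
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 200 * t)   # stand-in for a speech signal
noise = 0.5 * rng.standard_normal(fs)       # background noise at a chosen level

print(f"SNR: {snr_db(speech_like, noise):.1f} dB")
```

With these levels (signal power 0.5, noise power near 0.25), the SNR comes out close to 3 dB; an NR algorithm that attenuates the noise term raises this number, which is the sense in which NR "may increase the SNR in an ideal case".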