Auditory nerve single-unit population studies have demonstrated that phase-locking plays a dominant role in the neural encoding of both the spectrum and the voice pitch of speech sounds. Phase-locked neural activity underlying the scalp-recorded human frequency-following response (FFR) has also been shown to encode certain spectral features of steady-state and time-variant speech sounds, as well as the pitch of several complex sounds that produce time-invariant pitch percepts. By extension, it was hypothesized that the human FFR may preserve pitch-relevant information for speech sounds that elicit time-variant as well as steady-state pitch percepts. FFRs were elicited in response to the four lexical tones of Mandarin Chinese, as well as to a complex auditory stimulus that was spectrally different from but equivalent in fundamental frequency (f0) contour to one of the Chinese tones. Autocorrelation-based pitch extraction measures revealed that the FFR does indeed preserve pitch-relevant information for all stimuli. Phase-locked interpeak intervals closely followed f0. Spectrally different stimuli that were equivalent in f0 likewise showed robust interpeak intervals that followed f0. These FFR findings support the viability of early, population-based 'predominant interval' representations of pitch in the auditory brainstem that are based on temporal patterns of phase-locked neural activity.
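The autocorrelation-based pitch extraction mentioned above can be sketched as follows. This is a minimal illustration of the general technique, not the authors' actual analysis pipeline; the sampling rate, search range, and synthetic harmonic-complex test signal are all assumptions for the example:

```python
import numpy as np

def autocorr_pitch(signal, fs, fmin=50.0, fmax=500.0):
    """Estimate f0 as the lag of the largest autocorrelation peak
    within the plausible pitch range [fmin, fmax]."""
    signal = signal - np.mean(signal)
    # One-sided autocorrelation (lag 0 .. N-1).
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(fs / fmax)   # shortest lag corresponds to highest f0
    lag_max = int(fs / fmin)   # longest lag corresponds to lowest f0
    peak_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / peak_lag

# Assumed test signal: a 50-ms harmonic complex with f0 = 120 Hz
# (harmonics 1-3), not a stimulus from the study.
fs = 16000
t = np.arange(0, 0.05, 1 / fs)
x = sum(np.sin(2 * np.pi * 120 * k * t) for k in (1, 2, 3))
print(autocorr_pitch(x, fs))  # ~120 Hz
```

The key idea matches the 'predominant interval' account: a periodic signal's autocorrelation peaks at lags equal to the pitch period, just as phase-locked interpeak intervals cluster at 1/f0. Tracking a time-variant f0 contour (as for the Mandarin tones) would apply the same estimator over short sliding windows.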
DOI: http://dx.doi.org/10.1016/S0378-5955(03)00402-7
Sci Rep
March 2025
Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69, Donostia-San Sebastián, 20009, Spain.
Learning to read affects speech perception. For example, the ability of listeners to recognize consistently spelled words faster than inconsistently spelled words is a robust finding called the Orthographic Consistency Effect (OCE). Previous studies located the OCE at the rime level and focused on languages with opaque orthographies.
Dev Sci
May 2025
Center for Childhood Deafness, Language, and Learning, Boys Town National Research Hospital, Omaha, Nebraska, USA.
Recent studies indicate children who are deaf and hard of hearing who use cochlear implants or hearing aids know fewer spoken words than their peers with typical hearing, and often those vocabularies differ in composition. To date, however, the interaction of a child's auditory profile with the lexical characteristics of words he or she knows has been minimally explored. The purpose of the present study is to evaluate how audiological history, phonological memory, and overall vocabulary knowledge interact with growth in types of spoken words known by children who are deaf and hard of hearing compared to children with typical hearing.
Brain Res
March 2025
Department of Speech-Language Pathology, Federal University of Paraiba, João Pessoa, PB 58051-900, Brazil.
Functional near-infrared spectroscopy (fNIRS) estimates the cortical hemodynamic response induced by sound stimuli. fNIRS can be used to understand the symptomatology of tinnitus and consequently provide effective ways of evaluating and treating the symptom.
Objective: Compare the changes in the oxy-hemoglobin and deoxy-hemoglobin concentration of individuals with and without tinnitus using auditory stimulation by fNIRS.
Neuroimage
March 2025
Inkendaal Rehabilitation Hospital, Vlezenbeek, Belgium; Université libre de Bruxelles (ULB), Faculty of Psychology, Educational Sciences and Speech and Language therapy, Brussels, Belgium.
Maturation of the auditory system in early childhood significantly influences the development of language-related perceptual and cognitive abilities. This study aims to provide insights into the neurophysiological changes underlying auditory processing and speech-sound discrimination in the first two years of life. We conducted a study using high-density electroencephalography (EEG) to longitudinally record cortical auditory event-related potentials (CAEP) in response to synthesized syllable sounds with pitch/duration change in a cohort of 79 extremely and very preterm-born infants without developmental disorders.
J Acoust Soc Am
March 2025
Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA.
Listeners can adapt to noise-vocoded speech under divided attention using a dual-task design [Wang, Chen, Yan, McGettigan, Rosen, and Adank, Trends Hear. 27, 23312165231192297 (2023)]. Adaptation to noise-vocoded speech, an artificial degradation, was largely unaffected by domain-general (visuomotor) and domain-specific (semantic or phonological) dual tasks.