Objectives: To investigate the influence of gender on the subcortical representation of speech acoustic parameters when speech is presented simultaneously to both ears.
Methods: Two-channel speech-evoked auditory brainstem responses were recorded in 25 female and 23 male normal-hearing young adults using binaural presentation of the 40 ms synthetic consonant-vowel /da/. The subcortical encoding of the fast and slow elements of the speech stimulus was compared between the sexes in the temporal and spectral domains using independent-sample, two-tailed t-tests.
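The group comparison described in the Methods can be sketched with a pooled-variance independent-sample t statistic. This is a minimal stdlib-only illustration, not the authors' analysis pipeline; the function name and the toy data are invented for demonstration.

```python
import math
import statistics

def independent_t(a, b):
    """Pooled-variance independent-sample t statistic.

    A two-tailed test compares |t| against the t distribution
    with len(a) + len(b) - 2 degrees of freedom.
    """
    n1, n2 = len(a), len(b)
    m1, m2 = statistics.fmean(a), statistics.fmean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    # Pooled variance: variances weighted by their degrees of freedom.
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Invented example values (e.g., an onset-response latency in ms per subject):
female = [6.4, 6.5, 6.6, 6.5, 6.7]
male = [6.8, 6.9, 6.7, 7.0, 6.8]
print(independent_t(female, male))  # negative t: shorter latencies in group 1
```

In practice a statistics package (e.g. SciPy's `ttest_ind`) would also return the two-tailed p-value; the hand-rolled version above only shows the computation the test is built on.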
Results: Highly detectable responses were obtained in both groups. Analysis in the time domain revealed earlier and larger fast onset responses in females, but no gender-related difference in the sustained segment or the offset of the response. Interpeak intervals between frequency-following response peaks were likewise invariant to sex. Reflecting the shorter onset responses in females, composite onset measures were also sex dependent. Analysis in the spectral domain showed a more robust representation of the fundamental frequency, the first formant, and the high-frequency components of the first formant in females than in males.
Conclusions: Anatomical, biological, and biochemical differences between females and males could alter the neural encoding of the acoustic cues of speech stimuli at the subcortical level. Females show an advantage in the binaural processing of the slow and fast elements of speech. This could constitute physiological evidence for women's better identification of the speaker and the emotional tone of voice, as well as their better perception of the phonetic information of speech.
DOI: http://dx.doi.org/10.1016/j.anl.2013.10.010
Bioengineering (Basel)
November 2024
College of Engineering, Design and Physical Sciences, Brunel University London, London UB8 3PH, UK.
Attention is one of many human cognitive functions that are essential in everyday life. Given our limited processing capacity, attention helps us focus only on what matters. Focusing attention on one speaker in an environment with many speakers is a critical ability of the human auditory system.
Front Hum Neurosci
December 2024
Ph.D. Program in Speech-Language-Hearing Sciences, The Graduate Center, The City University of New York Graduate Center, New York, NY, United States.
Introduction: Lateral temporal neural measures (Na and T-complex Ta and Tb) of the auditory evoked potential (AEP) index auditory/speech processing and have been observed in children and adults. While Na is already present in children under 4 years of age, Ta emerges from 4 years of age, and Tb appears even later. The T-complex has been found to be sensitive to language experience in Spanish-English and Turkish-German children and adults.
Int J Pediatr Otorhinolaryngol
January 2025
Department of Audiology, Ankara Medipol University Faculty of Health Sciences, Ankara, Turkey. Electronic address:
Objectives: This study aims to evaluate musical pitch and timbre perception in children who stutter and compare the results with typically developing children.
Methods: A total of 50 participants were included in the study, consisting of 25 children with stuttering (mean age = 10.06 years; range 6-17 years) and 25 typically developing children (mean age = 10.
J Vis Exp
December 2024
Department of Otolaryngology, Head and Neck Surgery, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health.
Single-sided deafness (SSD), where there is severe to profound hearing loss in one ear and normal hearing in the other, is a prevalent auditory condition that significantly impacts the quality of life for those affected. The ability to accurately localize sound sources is crucial for various everyday activities, including speech communication and environmental awareness. In recent years, bone conduction intervention has emerged as a promising solution for patients with SSD, offering a non-invasive alternative to traditional air conduction hearing aids.
J Psycholinguist Res
January 2025
Department of Linguistics, University of Potsdam, Potsdam, Germany.
Rhythm perception in speech and non-speech acoustic stimuli has been shown to be affected by general acoustic biases as well as by phonological properties of the native language of the listener. The present paper extends the cross-linguistic approach in this field by testing the application of the iambic-trochaic law as an assumed general acoustic bias on rhythmic grouping of non-speech stimuli by speakers of three languages: Arabic, Hebrew and German. These languages were chosen due to relevant differences in their phonological properties on the lexical level alongside similarities on the phrasal level.