Pitch, our perception of how high or low a sound is on a musical scale, is a fundamental perceptual attribute of sounds and is important for both music and speech. After more than a century of research, the exact mechanisms used by the auditory system to extract pitch are still being debated. Theoretically, pitch can be computed using either spectral or temporal acoustic features of a sound. We have investigated how cues derived from the temporal envelope and spectrum of an acoustic signal are used for pitch extraction in the common marmoset (Callithrix jacchus), a vocal primate species, by measuring pitch discrimination behaviorally and examining pitch-selective neuronal responses in auditory cortex. We find that pitch is extracted by marmosets using temporal envelope cues for lower pitch sounds composed of higher-order harmonics, whereas spectral cues are used for higher pitch sounds with lower-order harmonics. Our data support dual-pitch processing mechanisms, originally proposed by psychophysicists based on human studies, whereby pitch is extracted using a combination of temporal envelope and spectral cues.
Full text (PMC): http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3752143
DOI: http://dx.doi.org/10.1523/JNEUROSCI.2563-12.2012
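The contrast between the two cue types in the abstract above can be made concrete with a toy signal. The sketch below is a minimal illustration, not the study's analysis: it synthesizes a missing-fundamental harmonic complex and recovers the fundamental two ways, from the autocorrelation of the temporal envelope and from the spacing of spectral peaks. The sample rate, harmonic numbers, and pitch search range are arbitrary choices for the example.

```python
# Minimal sketch (not the authors' analysis): two generic ways a fundamental
# can be recovered from a missing-fundamental harmonic complex, illustrating
# temporal-envelope versus spectral pitch cues. All parameters are arbitrary.
import numpy as np
from scipy.signal import hilbert

fs = 48000                                     # sample rate in Hz
f0 = 200                                       # fundamental in Hz; only harmonics 6-10 are present
t = np.arange(0, 0.5, 1 / fs)
x = sum(np.sin(2 * np.pi * f0 * h * t) for h in range(6, 11))

# Temporal-envelope cue: the harmonics beat at f0, so the envelope's
# autocorrelation peaks at the period 1/f0.
env = np.abs(hilbert(x))
env -= env.mean()
ac = np.correlate(env, env, mode="full")[env.size - 1:]
lo, hi = int(fs / 400), int(fs / 50)           # restrict the search to 50-400 Hz
lag = lo + np.argmax(ac[lo:hi])
print("envelope-based pitch ~", fs / lag, "Hz")

# Spectral cue: with fine enough frequency resolution the harmonics appear as
# separate peaks whose spacing equals f0.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = np.sort(freqs[np.argsort(spec)[-5:]])  # crude: take the 5 largest bins
print("spectrum-based pitch ~", np.median(np.diff(peaks)), "Hz")
```

Both estimates converge on the 200 Hz fundamental even though the signal contains no energy at that frequency, which is the sense in which envelope and spectral cues are interchangeable in principle; which cue the auditory system can actually use depends on whether the harmonics are resolved by cochlear filtering.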
J Neurosci
January 2025
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, 20742
When we listen to speech, our brain's neurophysiological responses "track" its acoustic features, but it is less well understood how these auditory responses are enhanced by linguistic content. Here, we recorded magnetoencephalography (MEG) responses while subjects of both sexes listened to four types of continuous-speech-like passages: speech-envelope modulated noise, English-like non-words, scrambled words, and a narrative passage. Temporal response function (TRF) analysis provides strong neural evidence for the emergent features of speech processing in cortex, from acoustics to higher-level linguistics, as incremental steps in neural speech processing.
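For readers unfamiliar with the method mentioned above, a TRF models the continuous neural response as the convolution of a stimulus feature (for example, the speech envelope) with an unknown linear filter, typically estimated by regularized regression over time-lagged copies of the stimulus. The sketch below is a generic single-channel ridge-regression version of that idea, not the paper's pipeline; the lag window, regularization weight, and synthetic data are illustrative assumptions.

```python
# Hedged sketch of generic TRF estimation by ridge regression (illustrative,
# not the paper's analysis pipeline).
import numpy as np

def estimate_trf(stimulus, response, fs, max_lag_s=0.4, ridge=1.0):
    """Estimate a single-channel TRF relating stimulus to response at rate fs."""
    n_lags = int(max_lag_s * fs)
    # Design matrix: columns are time-lagged copies of the stimulus feature.
    X = np.column_stack([np.roll(stimulus, lag) for lag in range(n_lags)])
    X[:n_lags, :] = 0.0                           # discard wrapped-around samples
    # Ridge solution: (X'X + lambda * I)^-1 X'y
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)

# Toy usage with synthetic data (all values invented for the example).
fs = 100                                          # Hz
rng = np.random.default_rng(0)
envelope = rng.standard_normal(fs * 60)           # stand-in for a speech envelope
true_trf = np.exp(-np.arange(40) / 10.0)          # made-up 400 ms kernel
meg = np.convolve(envelope, true_trf)[:envelope.size] + rng.standard_normal(envelope.size)
estimated_trf = estimate_trf(envelope, meg, fs)   # should roughly recover true_trf
```

In practice the same regression is typically run with richer predictor sets (word onsets, lexical surprisal, and so on) alongside the acoustic envelope, which is how acoustic and higher-level linguistic contributions can be separated.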
Value Health Reg Issues
January 2025
Departamento de Ingeniería Informática, Facultad de Ingeniería, Universidad de Santiago de Chile, Santiago, Chile.
Objectives: Despite increasing investments in Latin American healthcare, the corresponding improvement in population health has not been proportional. This discrepancy may be attributable to how efficiently resources are utilized. This study used the data envelopment analysis (DEA) methodology to assess the efficiency of healthcare systems in 23 Latin American and Caribbean countries.
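DEA scores each decision-making unit against an empirical best-practice frontier by solving a small linear program per unit. The sketch below implements a standard input-oriented, constant-returns-to-scale (CCR) formulation as one plausible reading of the methodology; the toy inputs (spending, staff), the single output, and the function name are invented for illustration and are not the study's actual variable set.

```python
# Hedged sketch of an input-oriented CCR DEA model (a textbook formulation,
# not necessarily the exact specification used in the study).
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, unit):
    """Efficiency of one unit. X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * X[j, i] - theta * X[unit, i] <= 0
    A_inputs = np.c_[-X[unit].reshape(m, 1), X.T]
    # Outputs: -sum_j lambda_j * Y[j, r] <= -Y[unit, r]
    A_outputs = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c,
                  A_ub=np.vstack([A_inputs, A_outputs]),
                  b_ub=np.r_[np.zeros(m), -Y[unit]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                        # efficiency score in (0, 1]

# Toy data: 4 hypothetical health systems, 2 inputs (spending, staff), 1 output.
X = np.array([[5.0, 3.0], [8.0, 5.0], [4.0, 6.0], [6.0, 4.0]])
Y = np.array([[60.0], [70.0], [55.0], [72.0]])
print([round(dea_ccr_input(X, Y, j), 3) for j in range(len(X))])
```

A score of 1 means no convex combination of the other units can produce at least the same output from proportionally fewer inputs; scores below 1 quantify how far a unit's inputs could in principle be scaled down.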
Sci Rep
January 2025
School of Business Administration / Research Center for Energy Economics, Henan Polytechnic University, Jiaozuo, Henan, 454003, China.
Understanding the evolution of low-carbon efficiency in urban built-up areas is essential for developing countries striving to meet sustainable development goals. However, the mechanisms driving low-carbon efficiency and the associated development pathways remain underexplored. This study applies the Global Data Envelopment Analysis (DEA) model, the Global Malmquist-Luenberger Index, and econometric models to evaluate low-carbon efficiency and its determinants across China's urban built-up areas from 2010 to 2022.
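The Malmquist-type index layered on top of DEA tracks productivity change between periods by combining distance-function (efficiency) scores evaluated against each period's frontier. The arithmetic below follows the textbook Malmquist decomposition into efficiency change and technical change; the Global Malmquist-Luenberger index used in the study additionally credits reductions in undesirable outputs such as carbon emissions, which this toy omits, and all scores here are invented.

```python
# Hedged sketch of how a Malmquist productivity index is assembled from DEA
# efficiency scores (standard decomposition; scores below are made up).
import math

# E[frontier_period][data_period]: efficiency of one city's (inputs, outputs)
# from the given period, evaluated against the given period's frontier.
E = {"t":  {"t": 0.82, "t1": 0.90},
     "t1": {"t": 0.78, "t1": 0.88}}

efficiency_change = E["t1"]["t1"] / E["t"]["t"]              # "catching up" to the frontier
technical_change = math.sqrt((E["t"]["t1"] / E["t1"]["t1"]) *
                             (E["t"]["t"] / E["t1"]["t"]))   # shift of the frontier itself
malmquist_index = efficiency_change * technical_change
print(round(malmquist_index, 3))                             # > 1 means productivity improved
```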
eNeuro
January 2025
Hearing Technology @ WAVES, Department of Information Technology, Ghent University, Technologiepark 216, 9052 Zwijnaarde, Belgium
Speech intelligibility declines with age and sensorineural hearing damage (SNHL). However, it remains unclear whether cochlear synaptopathy (CS), a recently discovered form of SNHL, significantly contributes to this issue. CS refers to damage to the auditory-nerve synapses that innervate the inner hair cells, and there is currently no go-to diagnostic test available.
Sci Rep
December 2024
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on bottom-up neural coding fidelity of sensory information and top-down effortful listening. Here, we studied the interaction between temporal processing, measured using envelope following responses (EFRs) to amplitude-modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.
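For context, an EFR is conventionally quantified as the magnitude of the trial-averaged response spectrum at the stimulus's amplitude-modulation rate, compared against neighboring noise bins. The sketch below shows that generic computation on synthetic data; the sample rate, modulation rate, trial count, and noise-floor window are arbitrary assumptions, not values taken from the study.

```python
# Hedged sketch (illustrative, not the study's pipeline): quantify an EFR as
# spectral magnitude at the amplitude-modulation rate of the evoking tone.
import numpy as np

fs = 4096                    # EEG-like sample rate in Hz (arbitrary)
mod_rate = 110               # amplitude-modulation rate of the tone in Hz (arbitrary)
n_trials, dur = 200, 1.0
t = np.arange(0, dur, 1 / fs)

rng = np.random.default_rng(1)
# Fake single-trial responses: a weak component phase-locked to the modulation, plus noise.
trials = 0.1 * np.sin(2 * np.pi * mod_rate * t) + rng.standard_normal((n_trials, t.size))

avg = trials.mean(axis=0)                 # averaging suppresses non-phase-locked activity
spec = np.abs(np.fft.rfft(avg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
efr_bin = np.argmin(np.abs(freqs - mod_rate))
noise_floor = np.mean(np.r_[spec[efr_bin - 6:efr_bin - 1], spec[efr_bin + 2:efr_bin + 7]])
print("EFR magnitude:", spec[efr_bin], "noise floor:", noise_floor)
```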