Hearing loss prevents vocal learning and causes learned vocalizations to deteriorate, but how vocalization-related auditory feedback acts on neural circuits that control vocalization remains poorly understood. We deafened adult zebra finches, which rely on auditory feedback to maintain their learned songs, to test the hypothesis that deafening modifies synapses on neurons in a sensorimotor nucleus important to song production. Longitudinal in vivo imaging revealed that deafening selectively decreased the size and stability of dendritic spines on neurons that provide input to a striatothalamic pathway important to audition-dependent vocal plasticity, and changes in spine size preceded and predicted subsequent vocal degradation. Moreover, electrophysiological recordings from these neurons showed that structural changes were accompanied by functional weakening of both excitatory and inhibitory synapses, increased intrinsic excitability, and changes in spontaneous action potential output. These findings shed light on where and how auditory feedback acts within sensorimotor circuits to shape learned vocalizations.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3299981
DOI: http://dx.doi.org/10.1016/j.neuron.2011.12.038

Similar Publications

EXPRESS: Vocal and musical emotion perception, voice cue discrimination, and quality of life in cochlear implant users with and without acoustic hearing.

Q J Exp Psychol (Hove)

January 2025

Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.

This study aims to provide a comprehensive picture of auditory emotion perception in cochlear implant (CI) users by (1) investigating emotion categorization in both vocal (pseudo-speech) and musical domains, and (2) examining how individual differences in residual acoustic hearing, sensitivity to voice cues (voice pitch, vocal tract length), and quality of life (QoL) might be associated with vocal emotion perception and, going a step further, with musical emotion perception. In 28 adult CI users, with or without self-reported acoustic hearing, we showed that sensitivity (d') scores for emotion categorization varied widely across participants, in line with previous research. However, within participants, the d' scores for vocal and musical emotion categorization were significantly correlated, indicating similar processing of auditory emotional cues across the pseudo-speech and music domains and robustness of the tests.
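For context, the d' (d-prime) scores mentioned here are the standard sensitivity index from signal detection theory. Below is a minimal Python sketch of the usual computation from hit and false-alarm rates; the example rates are invented, and the study's multi-alternative categorization design may derive d' differently:

```python
# Minimal sketch: signal-detection sensitivity index d'.
# d' = z(hit rate) - z(false-alarm rate), where z is the inverse
# normal CDF. Rates must lie strictly between 0 and 1, or z diverges.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: larger d' means better discrimination; 0 is chance."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 80% hits, 20% false alarms -> d' ~= 1.68
print(d_prime(0.80, 0.20))
```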

Do goats recognise humans cross-modally?

PeerJ

January 2025

Department of Infectious Diseases and Public Health, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong, Hong Kong SAR, China.

Recognition plays a key role in the social lives of gregarious species, enabling animals to distinguish among social partners and tailor their behaviour accordingly. As domesticated animals regularly interact with humans, as well as members of their own species, we might expect mechanisms used to discriminate between conspecifics to also apply to humans. Given that goats can combine visual and vocal cues to recognise one another, we investigated whether this cross-modal recognition extends to discriminating among familiar humans.

Automated segmentation of child-clinician speech in naturalistic clinical contexts.

Res Dev Disabil

January 2025

Laboratory of Observation, Diagnosis, and Education, Department of Psychology and Cognitive Science - University of Trento, Via Matteo del Ben, 5B, Rovereto, TN 38068, Italy.

Background: Computational approaches hold significant promise for enhancing diagnosis and therapy in child and adolescent clinical practice. Clinical procedures heavily depend on vocal exchanges and interpersonal dynamics conveyed through speech. Research highlights the importance of investigating acoustic features and dyadic interactions during child development.

The accurate and reliable performance of learned vocalizations (e.g., speech and birdsong) modulates the efficacy of communication in humans and songbirds.

The development of deep convolutional generative adversarial network to synthesize odontocetes' clicks.

J Acoust Soc Am

January 2025

Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, College of Ocean and Earth Sciences, Xiamen University, Xiamen 361005, China.

Odontocetes are capable of dynamically changing their echolocation clicks to efficiently detect targets, and learning their clicking strategy can facilitate the design of man-made detection signals. In this study, we developed deep convolutional generative adversarial networks guided by an acoustic feature vector (AF-DCGANs) to synthesize narrowband clicks of the finless porpoise (Neophocaena phocaenoides sunameri) and broadband clicks of the bottlenose dolphin (Tursiops truncatus). The average short-time objective intelligibility (STOI), spectral correlation coefficient (Spe-CORR), waveform correlation coefficient (Wave-CORR), and dynamic time warping distance (DTW-Distance) of the synthetic clicks were 0.
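As a rough illustration of the generator side of such a network, here is a minimal PyTorch sketch of a DCGAN-style generator conditioned on an acoustic feature vector. The layer sizes, feature dimensionality, and output length are illustrative assumptions, not the authors' AF-DCGAN architecture:

```python
# Minimal sketch of a DCGAN-style generator for 1-D click waveforms,
# conditioned on an acoustic feature vector. All dimensions and the
# feature layout are hypothetical, not the published AF-DCGAN design.
import torch
import torch.nn as nn

class ClickGenerator(nn.Module):
    """Maps (noise, acoustic features) -> a short 1-D click waveform."""
    def __init__(self, noise_dim=64, feat_dim=8, out_len=256):
        super().__init__()
        # Project the concatenated (noise, feature) vector to a 16-sample seed map.
        self.fc = nn.Linear(noise_dim + feat_dim, 128 * 16)
        # Each transposed conv doubles the temporal length: 16 -> 256.
        self.net = nn.Sequential(
            nn.ConvTranspose1d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.ConvTranspose1d(64, 32, 4, 2, 1),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.ConvTranspose1d(32, 16, 4, 2, 1),
            nn.BatchNorm1d(16), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 4, 2, 1),
            nn.Tanh(),  # waveform amplitudes in [-1, 1]
        )

    def forward(self, z, feats):
        x = self.fc(torch.cat([z, feats], dim=1)).view(-1, 128, 16)
        return self.net(x)

# Usage: a batch of 4 synthetic clicks from made-up feature vectors
# (e.g., peak frequency, bandwidth, duration, normalized to [0, 1]).
gen = ClickGenerator()
z = torch.randn(4, 64)
feats = torch.rand(4, 8)
clicks = gen(z, feats)
print(clicks.shape)  # torch.Size([4, 1, 256])
```

In a full GAN, a mirror-image convolutional discriminator would score (waveform, feature) pairs, and the feature conditioning is what lets the trained generator produce clicks with requested acoustic properties.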
