Vocal Learning via Social Reinforcement by Infant Marmoset Monkeys.

Curr Biol

Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Psychology, Princeton University, Princeton, NJ 08544, USA; Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ 08544, USA. Electronic address:

Published: June 2017

For over half a century, primate vocalizations have been thought to undergo little or no experience-dependent acoustic change during development [1]. If any changes are apparent, they are routinely (and quite reasonably) attributed to the passive consequences of growth. Indeed, previous experiments on squirrel monkeys and macaque monkeys showed that social isolation [2, 3], deafness [2], cross-fostering [4] and parental absence [5] have little or no effect on vocal development. Here, we explicitly test in marmoset monkeys, a very vocal and cooperatively breeding species [6], whether the transformation of immature into mature contact calls by infants is influenced by contingent parental vocal feedback. Using a closed-loop design, we experimentally provided more versus less contingent vocal feedback to twin infant marmoset monkeys over their first 2 months of life, the interval during which their contact calls transform from noisy, immature calls to tonal, adult-like "phee" calls [7, 8]. Infants who received more contingent feedback had a faster rate of vocal development, producing mature-sounding contact calls earlier than the other twin. The differential rate of vocal development was not linked to genetics, perinatal experience, or body growth; nor did the amount of contingency influence the overall rate of spontaneous vocal production. Thus, we provide the first experimental evidence for production-related vocal learning during the development of a nonhuman primate.
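
The closed-loop contingent-feedback idea can be illustrated with a minimal sketch. The following is a hypothetical illustration only, not the authors' apparatus or parameters: detect_infant_call() and play_parent_phee() are assumed placeholder functions (call detection and speaker playback), and the contingency values are examples of "more versus less" feedback, not the study's settings.

import random
import time

# Hypothetical closed-loop contingent feedback loop (illustrative only).
# Assumed helpers: detect_infant_call() returns True when the infant
# produces a contact call; play_parent_phee() plays a recorded parental
# "phee" call through a speaker. Neither is part of the published methods.

FEEDBACK_LATENCY_S = 1.0  # assumed short delay before playback

def run_session(contingency, duration_s, detect_infant_call, play_parent_phee):
    """Respond to a fraction `contingency` of detected infant calls.

    One twin might be run with contingency close to 1.0 (feedback to
    nearly every call) and the other with a much lower value.
    """
    start = time.time()
    responded = 0
    while time.time() - start < duration_s:
        if detect_infant_call():
            if random.random() < contingency:
                time.sleep(FEEDBACK_LATENCY_S)  # contingent, short-latency reply
                play_parent_phee()
                responded += 1
    return responded

In a twin design like the one described above, the same session routine would be run for both infants, differing only in the contingency value passed in.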

DOI: http://dx.doi.org/10.1016/j.cub.2017.05.004

Publication Analysis

Top Keywords

vocal development: 12
contact calls: 12
vocal: 9
vocal learning: 8
infant marmoset: 8
marmoset monkeys: 8
calls infants: 8
vocal feedback: 8
rate vocal: 8
development: 5

Similar Publications

EXPRESS: Vocal and musical emotion perception, voice cue discrimination, and quality of life in cochlear implant users with and without acoustic hearing.

Q J Exp Psychol (Hove)

January 2025

Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.

This study aims to provide a comprehensive picture of auditory emotion perception in cochlear implant (CI) users by (1) investigating emotion categorization in both vocal (pseudo-speech) and musical domains, and (2) examining how individual differences in residual acoustic hearing, sensitivity to voice cues (voice pitch, vocal tract length), and quality of life (QoL) might be associated with vocal emotion perception and, going a step further, also with musical emotion perception. In 28 adult CI users, with or without self-reported acoustic hearing, we showed that sensitivity (d') scores for emotion categorization varied widely across participants, in line with previous research. However, within participants, the d' scores for vocal and musical emotion categorization were significantly correlated, indicating similar processing of auditory emotional cues across the pseudo-speech and music domains and robustness of the tests.
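
The sensitivity (d') scores mentioned above come from signal detection theory and are derived from hit and false-alarm rates. The sketch below is a generic illustration assuming a simple two-category case with a log-linear correction for extreme rates; it is not the cited study's exact analysis.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) avoids infinite
    z-scores when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 42 hits on 50 signal trials, 8 false alarms on 50 noise trials
print(d_prime(42, 8, 8, 42))  # roughly 1.93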


Communication sound processing in mouse AC is lateralized. Both left and right AC are highly specialised and differ in auditory stimulus representation, functional connectivity and field topography. Previous studies have highlighted intracortical functional circuits that explain hemispheric stimulus preference.


Automated segmentation of child-clinician speech in naturalistic clinical contexts.

Res Dev Disabil

January 2025

Laboratory of Observation, Diagnosis, and Education, Department of Psychology and Cognitive Science - University of Trento, Via Matteo del Ben, 5B, Rovereto, TN 38068, Italy. Electronic address:

Background: Computational approaches hold significant promise for enhancing diagnosis and therapy in child and adolescent clinical practice. Clinical procedures heavily depend on vocal exchanges and interpersonal dynamics conveyed through speech. Research highlights the importance of investigating acoustic features and dyadic interactions during child development.


Several studies have demonstrated that the severity of social communication problems, a core symptom of Autism Spectrum Disorder (ASD), is correlated with specific speech characteristics of ASD individuals. This suggests that it may be possible to develop speech analysis algorithms that can quantify ASD symptom severity from speech recordings in a direct and objective manner. Here we demonstrate the utility of a new open-source AI algorithm, ASDSpeech, which can analyze speech recordings of ASD children and reliably quantify their social communication difficulties across multiple developmental timepoints.
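
ASDSpeech itself is described in the cited work; purely as a generic illustration of the broader idea of quantifying a clinical score from speech recordings, the sketch below extracts a small set of acoustic features and fits a regression model. The feature set, model, file names, and severity values are all placeholder assumptions for illustration and do not reflect ASDSpeech's actual pipeline or any real data.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestRegressor

def acoustic_features(wav_path):
    """Tiny illustrative feature set: MFCC means plus pitch statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    f0 = librosa.yin(y, fmin=75, fmax=500, sr=sr)
    return np.concatenate([mfcc.mean(axis=1), [np.nanmean(f0), np.nanstd(f0)]])

# Placeholder recordings and clinician-rated severity scores (hypothetical,
# not data from the cited study).
paths = ["child_01.wav", "child_02.wav", "child_03.wav"]
severity = [12.0, 7.5, 15.0]

X = np.vstack([acoustic_features(p) for p in paths])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, severity)
predicted = model.predict(X)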


Vocal Health in SLPs: Easier Said Than Done.

J Voice

January 2025

Graduate School, Department of Speech and Language Therapy, Anadolu University, Eskişehir, Türkiye. Electronic address:

Objectives: As professional voice users, speech and language pathologists (SLPs) follow vocal hygiene behaviors both in the rehabilitation of voice disorders and in preventive interventions to reduce risk among healthy voice users. However, it remains unclear to what extent SLPs themselves adhere to vocal hygiene and healthy vocal behaviors, and how this affects vocal fatigue. This study aims to investigate the extent to which SLPs perform vocal hygiene behaviors, and their levels of vocal hygiene and vocal fatigue.

