Despite extensive research on avian vocal learning, we still lack a general understanding of how and when this ability evolved in birds. As the closest living relatives of the earliest Passeriformes, the New Zealand wrens (Acanthisitti) hold a key phylogenetic position for furthering our understanding of the evolution of vocal learning because they share a common ancestor with two vocal learners: oscines and parrots. However, the vocal learning abilities of New Zealand wrens remain unexplored. Here, we test for the presence of prerequisite behaviors for vocal learning in one of the two extant species of New Zealand wrens, the rifleman (Acanthisitta chloris). We detect the presence of unique individual vocal signatures and show how these signatures are shaped by social proximity, as demonstrated by group vocal signatures and strong acoustic similarities among distantly related individuals in close social proximity. Further, we reveal that rifleman calls exhibit phenotypic variance ratios similar to those previously reported in the learned vocalizations of the zebra finch, Taeniopygia guttata. Together, these findings provide strong evidence that riflemen vocally converge, and though the mechanism remains to be determined, they may also suggest that this vocal convergence is the result of rudimentary vocal learning abilities.
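The reported individual and group vocal signatures rest on how much of the total variation in a call feature is explained by caller identity, i.e., a repeatability-style phenotypic variance ratio. As a rough illustration only, not the authors' pipeline and using hypothetical peak-frequency values, the Python sketch below computes that among- versus within-individual variance ratio with a standard one-way ANOVA partition.

import numpy as np

def variance_ratio(values, individuals):
    """Share of total variance in a call feature explained by individual
    identity, estimated from a one-way ANOVA variance partition."""
    values = np.asarray(values, dtype=float)
    individuals = np.asarray(individuals)
    ids = np.unique(individuals)
    k = len(ids)            # number of individuals
    n = len(values)         # number of calls
    grand_mean = values.mean()

    # Among-individual and within-individual sums of squares
    ss_among = sum(len(values[individuals == i]) *
                   (values[individuals == i].mean() - grand_mean) ** 2
                   for i in ids)
    ss_within = sum(((values[individuals == i] -
                      values[individuals == i].mean()) ** 2).sum()
                    for i in ids)

    ms_among = ss_among / (k - 1)
    ms_within = ss_within / (n - k)
    # Effective group size for possibly unbalanced samples
    n0 = (n - sum((individuals == i).sum() ** 2 for i in ids) / n) / (k - 1)

    var_among = (ms_among - ms_within) / n0   # among-individual component
    var_within = ms_within                    # within-individual component
    return var_among / (var_among + var_within)

# Hypothetical peak-frequency measurements (kHz) for calls of three birds
peak_freq = [7.9, 8.1, 8.0, 8.6, 8.7, 8.5, 7.2, 7.3, 7.1]
bird_id = ["A", "A", "A", "B", "B", "B", "C", "C", "C"]
print(f"Among-individual variance ratio: {variance_ratio(peak_freq, bird_id):.2f}")

A ratio near 1 indicates strongly individual-specific calls; substituting social group for individual identity in the same partition is one way to probe group signatures.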


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11096322
DOI: http://dx.doi.org/10.1038/s42003-024-06253-y

Publication Analysis

Top Keywords

vocal learning (20), zealand wrens (16), social proximity (12), vocal (10), vocal convergence (8), passeriformes zealand (8), learning abilities (8), vocal signatures (8), learning (5), convergence social (4)

Similar Publications

Do marmosets really have names?

Learn Behav

January 2025

Dolphin Research Center, 58901 Overseas Highway, Grassy Key, FL, 33050, USA.

A recent study demonstrated that marmoset "phee calls" include information specific to the intended receiver of the call, and that receivers respond more to calls that are specifically directed at them. The authors interpret this as showing that these calls are name-like vocal labels for individual marmosets, but there is at least one other possibility that would equally explain these data.


Harmonic-to-noise ratio as speech biomarker for fatigue: K-nearest neighbour machine learning algorithm.

Med J Armed Forces India

December 2024

Associate Professor, Dayanand Sagar University, Bengaluru, India.

Background: A person's voice carries vital information about their physical and emotional health, and altered voice quality is noticeable after sleep loss. The circadian rhythm controls the sleep cycle; when it is disrupted, the resulting fatigue is manifested in speech.
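For illustration, the Python sketch below shows what a k-nearest-neighbour classifier over harmonic-to-noise-ratio (HNR) features might look like. The feature values, labels, and choice of k are hypothetical and not taken from this study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-recording features: [mean HNR (dB), HNR standard deviation (dB)]
X = np.array([[18.2, 2.1], [17.5, 2.4], [19.0, 1.8], [18.8, 2.0],
              [12.4, 3.5], [11.9, 3.8], [13.1, 3.2], [12.7, 3.6]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = rested, 1 = fatigued

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)  # k is a hypothetical choice
knn.fit(X_train, y_train)
print("Held-out accuracy:", knn.score(X_test, y_test))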


Background: Understanding the neural basis of behavior requires insight into how different brain systems coordinate with each other. Existing connectomes for several species have highlighted brain systems essential to behavior, yet their application to complex learned behaviors remains limited. Research on vocal learning in songbirds has focused extensively on the vocal control network, though recent work implicates a wider variety of circuits in important aspects of vocal behavior.


A non-local dual-stream fusion network for laryngoscope recognition.

Am J Otolaryngol

December 2024

Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin 300192, China; Institute of Otolaryngology of Tianjin, Tianjin, China; Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China; Key Clinical Discipline of Tianjin (Otolaryngology), Tianjin, China; Otolaryngology Clinical Quality Control Centre, Tianjin, China.

Purpose: To use deep learning technology to design and implement a model that can automatically classify laryngoscope images and assist doctors in diagnosing laryngeal diseases.

Materials And Methods: The experiment was based on 3057 images (normal, glottic cancer, granuloma, Reinke's edema, vocal cord cyst, leukoplakia, nodules, and polyps) from the Laryngoscope8 dataset. A classification model based on deep neural networks was developed and tested.
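As a generic baseline sketch for this kind of task, not the paper's non-local dual-stream fusion network, the Python code below fine-tunes a pretrained ResNet-18 for the eight laryngoscope classes; the directory layout, hyperparameters, and single training pass are assumptions for illustration.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 8  # normal, glottic cancer, granuloma, Reinke's edema,
                 # vocal cord cyst, leukoplakia, nodules, polyps

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumes images are sorted into per-class subfolders, e.g. laryngoscope8/train/<class>/
train_set = datasets.ImageFolder("laryngoscope8/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in train_loader:  # a single pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()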


Magnetic Resonance Imaging (MRI) enables analysis of speech production by capturing high-resolution images of the dynamic processes in the vocal tract. In clinical applications, combining MRI with synchronized speech recordings leads to improved patient outcomes, especially if a phonological-based approach is used for assessment. However, when audio signals are unavailable and only MRI data are used, sound recognition accuracy decreases.

