Despite extensive research on avian vocal learning, we still lack a general understanding of how and when this ability evolved in birds. As the closest living relatives of the earliest Passeriformes, the New Zealand wrens (Acanthisitti) hold a key phylogenetic position for furthering our understanding of the evolution of vocal learning because they share a common ancestor with two vocal learners: oscines and parrots. However, the vocal learning abilities of New Zealand wrens remain unexplored. Here, we test for the presence of prerequisite behaviors for vocal learning in one of the two extant species of New Zealand wrens, the rifleman (Acanthisitta chloris). We detect the presence of unique individual vocal signatures and show how these signatures are shaped by social proximity, as demonstrated by group vocal signatures and strong acoustic similarities among distantly related individuals in close social proximity. Further, we reveal that rifleman calls exhibit phenotypic variance ratios similar to those previously reported in the learned vocalizations of the zebra finch, Taeniopygia guttata. Together, these findings provide strong evidence that riflemen vocally converge, and though the mechanism remains to be determined, they may also suggest that this vocal convergence is the result of rudimentary vocal learning abilities.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11096322
DOI: http://dx.doi.org/10.1038/s42003-024-06253-y
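The abstract's comparison of phenotypic variance ratios lends itself to a short illustration. Below is a minimal sketch of the kind of variance partitioning such an analysis rests on: splitting variation in an acoustic feature into among-individual and within-individual components, whose ratio indexes how distinctive individual vocal signatures are. The data, column names, and feature are hypothetical; this is not the paper's actual pipeline.

```python
# Minimal sketch of an among/within-individual variance-ratio
# calculation of the kind the abstract alludes to. All data and
# column names below are hypothetical, not from the paper.
import pandas as pd

# Hypothetical table: one row per call, with the calling bird's ID
# and one acoustic feature (e.g., peak frequency in Hz).
calls = pd.DataFrame({
    "bird_id":   ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
    "peak_freq": [8210, 8190, 8240, 7950, 7980, 7940, 8410, 8390, 8430],
})

grand_mean = calls["peak_freq"].mean()
by_bird = calls.groupby("bird_id")["peak_freq"]

# Among-individual variance: spread of each bird's mean call around
# the grand mean across all birds.
var_among = ((by_bird.mean() - grand_mean) ** 2).mean()
# Within-individual variance: average spread of a bird's own calls
# around that bird's mean.
var_within = by_bird.var(ddof=0).mean()

# A high ratio means calls cluster tightly by individual, i.e.,
# distinctive individual vocal signatures.
print(f"among/within variance ratio: {var_among / var_within:.2f}")
```

In practice such analyses typically use mixed-effects models over many acoustic features rather than this simple one-feature partition, but the ratio being compared has the same structure.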
Learn Behav
January 2025
Dolphin Research Center, 58901 Overseas Highway, Grassy Key, FL, 33050, USA.
A recent study demonstrated that marmoset "phee calls" include information specific to the intended receiver of the call, and that receivers respond more to calls that are specifically directed at them. The authors interpret this as showing that these calls are name-like vocal labels for individual marmosets, but at least one other possibility would explain these data equally well.
Med J Armed Forces India
December 2024
Associate Professor, Dayananda Sagar University, Bengaluru, India.
Background: A person's voice can convey vital information about their physical and emotional health, and altered voice quality is noticeable after sleep loss. The circadian rhythm controls the sleep cycle; when it is disrupted, the resulting fatigue manifests in speech.
BMC Neurosci
December 2024
Department of Medicine, The University of Chicago, 5841 S Maryland Ave, Chicago, IL, 60637, USA.
Background: Understanding the neural basis of behavior requires insight into how different brain systems coordinate with each other. Existing connectomes for several species have highlighted brain systems essential to different aspects of behavior, yet their application to complex learned behaviors remains limited. Research on vocal learning in songbirds has focused extensively on the vocal control network, though recent work implicates a variety of other circuits in important aspects of vocal behavior.
Am J Otolaryngol
December 2024
Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin 300192, China; Institute of Otolaryngology of Tianjin, Tianjin, China; Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China; Key Clinical Discipline of Tianjin (Otolaryngology), Tianjin, China; Otolaryngology Clinical Quality Control Centre, Tianjin, China.
Purpose: To use deep learning technology to design and implement a model that can automatically classify laryngoscope images and assist doctors in diagnosing laryngeal diseases.
Materials And Methods: The experiment was based on 3057 images (normal, glottic cancer, granuloma, Reinke's edema, vocal cord cyst, leukoplakia, nodules, and polyps) from the Laryngoscope8 dataset. A classification model based on deep neural networks was developed and tested.
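The methods excerpt does not specify the network, so the sketch below shows one plausible setup only: a torchvision ResNet-18 fine-tuned for the eight Laryngoscope8 classes. The directory layout, class folder names, and hyperparameters are assumptions, not details from the paper.

```python
# Minimal sketch of an 8-class laryngoscope image classifier.
# Assumes a torchvision ResNet-18 backbone fine-tuned on a
# Laryngoscope8-style folder layout (one directory per class).
# Paths, folder names, and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 8  # normal, glottic cancer, granuloma, Reinke's edema,
                 # vocal cord cyst, leukoplakia, nodules, polyps

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: laryngoscope8/train/<class_name>/<image>.jpg
train_set = datasets.ImageFolder("laryngoscope8/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # 8-way head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Starting from ImageNet weights and replacing only the final layer is a common choice when, as here, only a few thousand labeled medical images are available.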
Interspeech
September 2024
Pattern Recognition Lab, Friedrich-Alexander University, Erlangen, Germany.
Magnetic resonance imaging (MRI) makes it possible to analyze speech production by capturing high-resolution images of the dynamic processes in the vocal tract. In clinical applications, combining MRI with synchronized speech recordings leads to improved patient outcomes, especially if a phonology-based approach is used for assessment. However, when audio signals are unavailable and only MRI data can be used, sound recognition accuracy decreases.