Animal acoustic communication often takes the form of complex sequences made up of multiple distinct acoustic units. Apart from the well-known example of birdsong, other animals such as insects, amphibians, and mammals (including bats, rodents, primates, and cetaceans) also generate complex acoustic sequences. Occasionally, as with birdsong, the adaptive role of these sequences seems clear (e.g. mate attraction and territorial defence). More often, however, researchers have only begun to characterise, let alone understand, the significance and meaning of acoustic sequences. Hypotheses abound, but there is little agreement as to how sequences should be defined and analysed. Our review aims to outline suitable methods for testing these hypotheses, and to describe the major limitations to our current and near-future knowledge on questions of acoustic sequences. This review and prospectus is the result of a collaborative effort among 43 scientists from the fields of animal behaviour, ecology and evolution, signal processing, machine learning, quantitative linguistics, and information theory, who gathered for a 2013 workshop entitled 'Analysing vocal sequences in animals'. Our goal is not just to review the state of the art, but to propose a methodological framework summarising what we suggest are best practices for research in this field, across taxa and across disciplines. We also provide a tutorial-style introduction to some of the most promising algorithmic approaches for analysing sequences. We divide our review into three sections: identifying the distinct units of an acoustic sequence, describing the different ways that information can be contained within a sequence, and analysing the structure of that sequence. Each of these sections is further subdivided to address the key questions and approaches in that area. We propose a uniform, systematic, and comprehensive approach to studying sequences, with the goal of clarifying research terms used in different fields and facilitating collaboration and comparative studies. Greater interdisciplinary collaboration will, in turn, enable the investigation of many important questions in the evolution of communication and sociality.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4444413 | PMC
http://dx.doi.org/10.1111/brv.12160 | DOI Listing
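Among the algorithmic approaches the review introduces for analysing sequence structure, a first-order Markov (bigram) model is one of the most common starting points. As a minimal illustrative sketch (not the review's own implementation), the following Python snippet estimates transition probabilities from a sequence of already-labelled acoustic units; the unit labels and example bout are hypothetical:

```python
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    """Estimate first-order Markov transition probabilities
    from a sequence of discrete acoustic unit labels."""
    # Count adjacent label pairs (bigrams) and the number of
    # times each label occurs as the first element of a pair.
    pair_counts = Counter(zip(sequence, sequence[1:]))
    unit_totals = Counter(sequence[:-1])
    probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        probs[a][b] = n / unit_totals[a]
    return dict(probs)

# Hypothetical sequence of labelled units from one vocal bout.
bout = ["A", "B", "B", "A", "C", "A", "B"]
print(transition_probabilities(bout))
# e.g. {'A': {'B': 0.667, 'C': 0.333}, 'B': {'B': 0.5, 'A': 0.5}, 'C': {'A': 1.0}}
```

The resulting matrix can then feed downstream analyses such as tests against higher-order or non-Markovian models, which the review discusses.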
J Psycholinguist Res
January 2025
Department of Linguistics, University of Potsdam, Potsdam, Germany.
Rhythm perception in speech and non-speech acoustic stimuli has been shown to be affected both by general acoustic biases and by phonological properties of the listener's native language. The present paper extends the cross-linguistic approach in this field by testing the iambic-trochaic law, an assumed general acoustic bias, on the rhythmic grouping of non-speech stimuli by speakers of three languages: Arabic, Hebrew, and German. (The iambic-trochaic law holds that sounds contrasting in intensity are grouped trochaically, strong-weak, whereas sounds contrasting in duration are grouped iambically, weak-strong.) These languages were chosen due to relevant differences in their phonological properties at the lexical level alongside similarities at the phrasal level.
J Commun Disord
December 2024
Department of Communication Sciences and Disorders, University of Wisconsin - Eau Claire, Human Sciences and Services 127, 239 Water Street, Eau Claire, Wisconsin 54703, United States.
Purpose: The aim of the current study was to examine whether the relationship among three semivowel sounds (/l, ɹ, w/), and between each semivowel and the following vowel, differs with children's overall speech proficiency, and whether this relationship affects listeners' perceptual judgment of the liquid sounds (/l, ɹ/). The acoustic proximity among the three semivowel sounds and the acoustic characteristics of the following vowel sounds were examined in relation to each child speaker's overall speech sound proficiency and semivowel accuracy.
Methods: A total of 21 monolingual English-speaking children with and without speech sound disorders produced monosyllabic words containing target semivowel sounds in word-initial position in different vowel contexts.
Braz J Otorhinolaryngol
January 2025
Shanghai Jiao Tong University, School of Medicine, Hainan Branch of Shanghai Children's Medical Center, Department of Otorhinolaryngology, Sanya, China; Shanghai Jiao Tong University, School of Medicine, Shanghai Children's Medical Center, Department of Otorhinolaryngology, Shanghai, China.
Objective: We aimed to investigate the correlation between prevalent risk factors for high-risk neonates in the neonatal intensive care unit and their hearing loss, and to examine the audiological features and genetic profiles associated with different deafness mutations in our tertiary referral center. This research seeks to deepen our understanding of the etiology behind congenital hearing loss.
Methods: We conducted initial hearing screenings, including automated auditory brainstem response, distortion product otoacoustic emission, and acoustic immittance, on 443 high-risk neonates within 7 days of birth and, if necessary, again at 42 days after birth.
Ecotoxicol Environ Saf
December 2024
Department of Biological Sciences, Clemson University, Clemson, SC, USA.
PLoS One
December 2024
Department of Spanish Philology, University of Málaga, Málaga, Spain.
Nasalance is a valuable clinical biomarker for hypernasality. It is computed as the ratio of acoustic energy emitted through the nose to the total energy emitted through the mouth and nose (eNasalance). A new approach is proposed to compute nasalance using Convolutional Neural Networks (CNNs) trained with Mel-Frequency Cepstral Coefficients (mfccNasalance).
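For reference, below is a minimal sketch of the classic energy-ratio definition of eNasalance described above, assuming separate nose and mouth channels are available as NumPy arrays (as from a dual-channel nasometer-style recording). The function name and synthetic signals are hypothetical, and the CNN-based mfccNasalance approach itself is not reproduced here.

```python
import numpy as np

def e_nasalance(nasal, oral):
    """Classic energy-based nasalance: nasal acoustic energy divided
    by total (nasal + oral) energy, expressed as a percentage.
    `nasal` and `oral` are same-length mono signals from separate
    nose and mouth channels."""
    nasal_energy = np.sum(np.square(nasal, dtype=np.float64))
    oral_energy = np.sum(np.square(oral, dtype=np.float64))
    return 100.0 * nasal_energy / (nasal_energy + oral_energy)

# Hypothetical example with synthetic signals (1 s at 16 kHz).
rng = np.random.default_rng(0)
nose = 0.3 * rng.standard_normal(16000)   # weaker nasal channel
mouth = 1.0 * rng.standard_normal(16000)  # stronger oral channel
print(f"eNasalance: {e_nasalance(nose, mouth):.1f}%")
```

In practice the ratio is typically computed frame by frame and averaged over a speech passage rather than over the whole signal at once, but the per-frame computation is the same ratio shown here.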