In adult normal-hearing listeners, perception of music, vocal emotion, and speech in noise has previously been shown to be better in musicians than in non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech understanding in noise were measured in young adolescent normal-hearing musicians and non-musicians listening to unprocessed or degraded signals. In contrast to adults, there was no musician effect for vocal emotion identification or speech in noise. Melodic contour identification with degraded signals was significantly better in musicians, suggesting potential benefits from music training for young cochlear-implant users, who experience similar spectro-temporal signal degradations.
DOI: http://dx.doi.org/10.1121/1.5034489
Q J Exp Psychol (Hove)
January 2025
Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
This study aims to provide a comprehensive picture of auditory emotion perception in cochlear implant (CI) users by (1) investigating emotion categorization in both vocal (pseudo-speech) and musical domains, and (2) examining how individual differences in residual acoustic hearing, sensitivity to voice cues (voice pitch, vocal tract length), and quality of life (QoL) might be associated with vocal emotion perception and, going a step further, also with musical emotion perception. In 28 adult CI users, with or without self-reported acoustic hearing, we showed that sensitivity (d') scores for emotion categorization varied largely across the participants, in line with previous research. However, within participants, the d' scores for vocal and musical emotion categorization were significantly correlated, indicating similar processing of auditory emotional cues across the pseudo-speech and music domains and robustness of the tests.
Sci Rep
January 2025
Department of Psychology, New York University, New York, NY, USA.
Music can evoke powerful emotions in listeners. However, the role that instrumental music (music without any vocal part) plays in conveying extra-musical meaning, above and beyond emotions, is still a debated question. We conducted a study wherein participants (N = 121) listened to twenty 15-second-long excerpts of polyphonic instrumental soundtrack music and reported perceived emotions.
Horm Behav
January 2025
Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland; Kalahari Meerkat Project, Kuruman River Reserve, Northern Cape, South Africa; Center for the Interdisciplinary Study of Language Evolution, ISLE, University of Zurich, Switzerland.
Encoding of emotional arousal in vocalisations is commonly observed in the animal kingdom, and provides a rapid means of information transfer about an individual's affective responses to internal and external stimuli. As a result, assessing arousal-related variation in the acoustic structure of vocalisations can provide insight into how animals perceive both internal and external stimuli, and how this is, in turn, communicated to con- or heterospecifics. However, the underlying physiological mechanisms driving arousal-related acoustic variation remain unclear.
Open Res Eur
January 2025
Center for Innovative Research and Liaison, Wakayama University, Wakayama, Wakayama Prefecture, Japan.
The purpose of this paper is to make an efficient voice morphing tool called STRAIGHTMORPH easily available to the scientific community and to provide a short tutorial on its use with examples. STRAIGHTMORPH consists of a set of Matlab functions allowing the generation of high-quality, parametrically controlled morphs of an arbitrary number of voice samples. The first step consists of extracting an 'mObject' for each voice sample, with accurate tracking of the fundamental frequency contour and manual definition of Time and Frequency anchors corresponding across the samples to be morphed.
PLoS One
January 2025
Computer Engineering, CCSIT, King Faisal University, Al Hufuf, Kingdom of Saudi Arabia.
The health of poultry flocks is crucial in sustainable farming. Recent advances in machine learning and speech analysis have opened up opportunities for real-time monitoring of flock behavior and health. However, there has been little research on using Tiny Machine Learning (TinyML) for continuous vocalization monitoring in poultry.