Many animal vocalizations contain nonlinear acoustic phenomena as a consequence of physiological arousal. In humans, nonlinear features are processed early in the auditory system, and are used to efficiently detect alarm calls and other urgent signals. Yet, high-level emotional and semantic contextual factors likely guide the perception and evaluation of roughness features in vocal sounds. Here we examined the relationship between perceived vocal arousal and auditory context. We presented listeners with nonverbal vocalizations (yells of a single vowel) at varying levels of portrayed vocal arousal, in two musical contexts (clean guitar, distorted guitar) and one non-musical context (modulated noise). As predicted, vocalizations with higher levels of portrayed vocal arousal were judged as more negative and more emotionally aroused than the same voices produced with low vocal arousal. Moreover, both the perceived valence and emotional arousal of vocalizations were significantly affected by both musical and non-musical contexts. These results show the importance of auditory context in judging emotional arousal and valence in voices and music, and suggest that nonlinear features in music are processed similarly to communicative vocal signals.

Source: http://dx.doi.org/10.1016/j.beproc.2020.104042

Publication Analysis

Top Keywords

vocal arousal (16)
perceived vocal (8)
nonlinear features (8)
auditory context (8)
levels portrayed (8)
portrayed vocal (8)
emotional arousal (8)
vocal (7)
arousal (7)
sound context (4)

Similar Publications

Nonverbal emotional vocalizations play a crucial role in conveying emotions during human interactions. Validated corpora of these vocalizations have facilitated emotion-related research and found wide-ranging applications. However, existing corpora have lacked representation from diverse cultural backgrounds, which may limit the generalizability of the resulting theories.

Effect of exogenous manipulation of glucocorticoid concentrations on meerkat heart rate, behaviour and vocal production.

Horm Behav

January 2025

Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland; Kalahari Meerkat Project, Kuruman River Reserve, Northern Cape, South Africa; Center for the Interdisciplinary Study of Language Evolution, ISLE, University of Zurich, Switzerland.

Encoding of emotional arousal in vocalisations is commonly observed in the animal kingdom, and provides a rapid means of information transfer about an individual's affective responses to internal and external stimuli. As a result, assessing affective arousal-related variation in the acoustic structure of vocalisations can provide insight into how animals perceive both internal and external stimuli, and how this is, in turn, communicated to con- or heterospecifics. However, the underlying physiological mechanisms driving arousal-related acoustic variation remain unclear.

Successful awake intubation using Airtraq in a low-resource setting for a patient with severe post-burn contractures.

BMC Anesthesiol

January 2025

Department of Anesthesiology, Pharmacology, Intensive Care and Emergency Medicine, University Hospitals of Geneva, Geneva, 1205, Switzerland.

Background: In resource-limited settings, advanced airway management tools like fiberoptic bronchoscopes are often unavailable, creating challenges for managing difficult airways. We present the case of a 25-year-old male with post-burn contractures of the face, neck, and thorax in Nigeria, who had been repeatedly denied surgery due to the high risk of airway management complications. This case highlights how an awake intubation was safely performed using an Airtraq laryngoscope, the only device available, as fiberoptic intubation was not an option.

Attention-Based PSO-LSTM for Emotion Estimation Using EEG.

Sensors (Basel)

December 2024

Department of Information and Electronic Engineering, International Hellenic University, 57001 Thessaloniki, Greece.

Recent advances in emotion recognition through Artificial Intelligence (AI) have demonstrated potential applications in various fields (e.g., healthcare, advertising, and driving technology), with electroencephalogram (EEG)-based approaches demonstrating superior accuracy compared to facial or vocal methods due to their resistance to intentional manipulation.

Facial mimicry of visually observed emotional facial actions is a robust phenomenon. Here, we examined whether such facial mimicry extends to auditory emotional stimuli. We also examined whether participants' facial responses differ for sounds that are more strongly associated with congruent facial movements, such as vocal emotional expressions.
