The ability to recognize emotions undergoes major developmental changes from infancy to adolescence, peaking in early adulthood and declining with aging. A life span approach to emotion recognition is lacking in the auditory domain, and it remains unclear how the speaker's and listener's ages interact in the context of decoding vocal emotions. Here, we examined age-related differences in vocal emotion recognition from childhood to older adulthood and tested for a potential own-age bias in performance. A total of 164 participants (36 children [7-11 years], 53 adolescents [12-17 years], 48 young adults [20-30 years], 27 older adults [58-82 years]) completed a forced-choice emotion categorization task with nonverbal vocalizations expressing pleasure, relief, achievement, happiness, sadness, disgust, anger, fear, surprise, and neutrality. These vocalizations were produced by 16 speakers, 4 from each age group (children [8-11 years], adolescents [14-16 years], young adults [19-23 years], older adults [60-75 years]). Accuracy in vocal emotion recognition improved from childhood to early adulthood and declined in older adulthood. Moreover, patterns of improvement and decline differed by emotion category: development was faster for pleasure, relief, sadness, and surprise, and decline was delayed for fear and surprise. Vocal emotions produced by older adults were more difficult to recognize than those produced by all other age groups. No evidence for an own-age bias was found, except in children. These findings support effects of both speaker and listener ages on how vocal emotions are decoded and inform current models of vocal emotion perception.


Source: http://dx.doi.org/10.1037/emo0000692


